[
{
"msg_contents": "Hi hackers,\n\nPartitioning is necessary for very large tables.\n However, I found that postgresql does not support create index concurrently on partitioned tables.\nThe document show that we need to create an index on each partition individually and then finally create the partitioned index non-concurrently. \nThis is undoubtedly a complex operation for DBA, especially when there are many partitions. \n\nTherefore, I wonder why pg does not support concurrent index creation on partitioned tables? \nWhat are the difficulties of this function? \nIf I want to implement it, what should I pay attention?\n\nSincerely look forward to your reply. \n\nRegards & Thanks Adger\nHi hackers, Partitioning is necessary for very large tables. However, I found that postgresql does not support create index concurrently on partitioned tables.The document show that we need to create an index on each partition individually and then finally create the partitioned index non-concurrently. This is undoubtedly a complex operation for DBA, especially when there are many partitions. Therefore, I wonder why pg does not support concurrent index creation on partitioned tables? What are the difficulties of this function? If I want to implement it, what should I pay attention?Sincerely look forward to your reply. Regards & Thanks Adger",
"msg_date": "Wed, 03 Jun 2020 20:22:29 +0800",
"msg_from": "\"=?UTF-8?B?5p2O5p2wKOaFjui/vSk=?=\" <adger.lj@alibaba-inc.com>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?aG93IHRvIGNyZWF0ZSBpbmRleCBjb25jdXJyZW50bHkgb24gcGFyaXRpb25lZCB0YWJsZQ==?="
},
{
"msg_contents": "On Wed, Jun 03, 2020 at 08:22:29PM +0800, 李杰(慎追) wrote:\n> Partitioning is necessary for very large tables.\n> However, I found that postgresql does not support create index concurrently on partitioned tables.\n> The document show that we need to create an index on each partition individually and then finally create the partitioned index non-concurrently. \n> This is undoubtedly a complex operation for DBA, especially when there are many partitions. \n\n> Therefore, I wonder why pg does not support concurrent index creation on partitioned tables? \n> What are the difficulties of this function? \n> If I want to implement it, what should I pay attention?\n\nMaybe I'm wrong, but I don't think there's any known difficulty - just that\nnobody did it yet. You should pay attention to what happens on error, but\nhopefully you wouldn't need to add much code and can rely on existing code to\npaths to handle that right.\n\nI think you'd look at the commits and code implementing indexes on partitioned\ntables and CREATE INDEX CONCURRENTLY. And maybe any following commits with\nfixes.\n\nYou'd first loop around all children (recursively if there are partitions which\nare themselves partitioned) and create indexes concurrently. \n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 6 Jun 2020 09:23:32 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: how to create index concurrently on paritioned table"
},
{
"msg_contents": "On Sat, Jun 06, 2020 at 09:23:32AM -0500, Justin Pryzby wrote:\n> On Wed, Jun 03, 2020 at 08:22:29PM +0800, 李杰(慎追) wrote:\n> > Partitioning is necessary for very large tables.\n> > However, I found that postgresql does not support create index concurrently on partitioned tables.\n> > The document show that we need to create an index on each partition individually and then finally create the partitioned index non-concurrently. \n> > This is undoubtedly a complex operation for DBA, especially when there are many partitions. \n> \n> > Therefore, I wonder why pg does not support concurrent index creation on partitioned tables? \n> > What are the difficulties of this function? \n> > If I want to implement it, what should I pay attention?\n> \n> Maybe I'm wrong, but I don't think there's any known difficulty - just that\n> nobody did it yet.\n\nI said that but I was actually thinking about the code for \"REINDEX\nCONCURRENTLY\" (which should also handle partitioned tables).\n\nI looked at CIC now and came up with the attached. All that's needed to allow\nthis case is to close the relation before recursing to partitions - it needs to\nbe closed before calling CommitTransactionCommand(). There's probably a better\nway to write this, but I can't see that there's anything complicated about\nhandling partitioned tables.\n\n-- \nJustin",
"msg_date": "Sun, 7 Jun 2020 13:04:48 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: how to create index concurrently on paritioned table"
},
{
"msg_contents": "On Sun, Jun 07, 2020 at 01:04:48PM -0500, Justin Pryzby wrote:\n> On Sat, Jun 06, 2020 at 09:23:32AM -0500, Justin Pryzby wrote:\n> > On Wed, Jun 03, 2020 at 08:22:29PM +0800, 李杰(慎追) wrote:\n> > > Partitioning is necessary for very large tables. However, I found that\n> > > postgresql does not support create index concurrently on partitioned\n> > > tables. The document show that we need to create an index on each\n> > > partition individually and then finally create the partitioned index\n> > > non-concurrently. This is undoubtedly a complex operation for DBA,\n> > > especially when there are many partitions. \n\nI added functionality for C-I-C, REINDEX-CONCURRENTLY, and CLUSTER of\npartitioned tables. We already recursively handle VACUUM and ANALYZE since\nv10.\n\nAnd added here:\nhttps://commitfest.postgresql.org/28/2584/\n\nAdger, if you're familiar with compilation and patching, do you want to try the\npatch ?\n\nNote, you could do this now using psql like:\nSELECT format('CREATE INDEX CONCURRENTLY ... ON %s(col)', a::regclass) FROM pg_partition_ancestors() AS a;\n\\gexec\n\n-- \nJustin",
"msg_date": "Thu, 11 Jun 2020 10:35:02 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: how to create index concurrently on partitioned table"
},
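[Editor's note: the per-partition workaround Justin sketches with `\gexec` can be written out concretely. The sketch below is an illustration, not part of the thread's patches; `parent_tbl` and `col` are placeholder names, and it uses pg_partition_tree() (available since PostgreSQL 12) to enumerate the leaf partitions.]

```sql
-- Placeholder names: parent_tbl, col.
-- Generate one CREATE INDEX CONCURRENTLY statement per leaf partition,
-- then execute each generated statement with \gexec.
SELECT format('CREATE INDEX CONCURRENTLY ON %s (col)', relid::regclass)
FROM pg_partition_tree('parent_tbl')
WHERE isleaf;
\gexec

-- Finally create the parent index non-concurrently.  Since a matching
-- index already exists on every leaf, the children are attached rather
-- than rebuilt, so this step should hold its locks only briefly.
CREATE INDEX ON parent_tbl (col);
```

On a multi-level tree the final CREATE INDEX also creates the intermediate partitioned indexes before attaching the leaves.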
{
"msg_contents": "On Thu, Jun 11, 2020 at 10:35:02AM -0500, Justin Pryzby wrote:\n> Note, you could do this now using psql like:\n> SELECT format('CREATE INDEX CONCURRENTLY ... ON %s(col)', a::regclass) FROM pg_partition_ancestors() AS a;\n> \\gexec\n\nI have skimmed quickly through the patch set, and something has caught\nmy attention.\n\n> drop table idxpart;\n> --- Some unsupported features\n> +-- CIC on partitioned table\n> create table idxpart (a int, b int, c text) partition by range (a);\n> create table idxpart1 partition of idxpart for values from (0) to (10);\n> create index concurrently on idxpart (a);\n> -ERROR: cannot create index on partitioned table \"idxpart\" concurrently\n> +\\d idxpart1\n\nWhen it comes to test behaviors specific to partitioning, there are in\nmy experience three things to be careful about and stress in the tests:\n- Use at least two layers of partitioning.\n- Include into the partition tree a partition that has no leaf\npartitions.\n- Test the commands on the top-most parent, a member in the middle of\nthe partition tree, the partition with no leaves, and one leaf, making\nsure that relfilenode changes where it should and that partition trees\nremain intact (you can use pg_partition_tree() for that.)\n\nThat's to say that the amount of regression tests added here is not\nsufficient in my opinion.\n--\nMichael",
"msg_date": "Fri, 12 Jun 2020 16:20:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: how to create index concurrently on partitioned table"
},
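[Editor's note: Michael's three-point checklist can be turned into a concrete test scaffold. The names below are made up for illustration; the tree has two layers of partitioning and includes a partitioned member with no leaf partitions, as the checklist asks.]

```sql
-- "parted_nodata" is itself partitioned but has no leaf partitions.
CREATE TABLE parted (a int) PARTITION BY RANGE (a);
CREATE TABLE parted_nodata PARTITION OF parted
    FOR VALUES FROM (0) TO (100) PARTITION BY RANGE (a);
CREATE TABLE parted_mid PARTITION OF parted
    FOR VALUES FROM (100) TO (200) PARTITION BY RANGE (a);
CREATE TABLE parted_leaf PARTITION OF parted_mid
    FOR VALUES FROM (100) TO (150);

-- Run the command under test against parted, parted_mid, parted_nodata
-- and parted_leaf in turn, then verify the tree is still intact and the
-- relfilenodes changed only where expected:
SELECT relid, parentrelid, isleaf, level FROM pg_partition_tree('parted');
```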
{
"msg_contents": ">On Sat, Jun 06, 2020 at 09:23:32AM -0500, Justin Pryzby wrote:\n> > On Wed, Jun 03, 2020 at 08:22:29PM +0800, 李杰(慎追) wrote:\n> > > Partitioning is necessary for very large tables.\n> > > However, I found that postgresql does not support create index concurrently on partitioned tables.\n> > > The document show that we need to create an index on each partition individually and then finally create the partitioned index non-concurrently. \n> > > This is undoubtedly a complex operation for DBA, especially when there are many partitions. \n> > \n> > > Therefore, I wonder why pg does not support concurrent index creation on partitioned tables? \n> > > What are the difficulties of this function? \n> > > If I want to implement it, what should I pay attention?\n> > \n> > Maybe I'm wrong, but I don't think there's any known difficulty - just that\n> > nobody did it yet.\n\n> I said that but I was actually thinking about the code for \"REINDEX\n> CONCURRENTLY\" (which should also handle partitioned tables).\n\n> I looked at CIC now and came up with the attached. All that's needed to allow\n> this case is to close the relation before recursing to partitions - it needs to\n> be closed before calling CommitTransactionCommand(). There's probably a better\n> way to write this, but I can't see that there's anything complicated about\n> handling partitioned tables.\n\nHi, Justin Pryzby\n\nI'm so sorry about getting back late.\nThank you very much for helping me consider this issue.\nI compiled the patch v1 you provided. And I patch v2-001 again to enter postgresql.\nI got a coredump that was easy to reproduce. 
As follows:\n\n#0 PopActiveSnapshot () at snapmgr.c:822\n#1 0x00000000005ca687 in DefineIndex (relationId=relationId@entry=16400, \nstmt=stmt@entry=0x1aa5e28, indexRelationId=16408, indexRelationId@entry=0, \nparentIndexId=parentIndexId@entry=16406,\n parentConstraintId=0, is_alter_table=is_alter_table@entry=false,\n check_rights=true, check_not_in_use=true, skip_build=false, quiet=false) \nat indexcmds.c:1426\n#2 0x00000000005ca5ab in DefineIndex (relationId=relationId@entry=16384, \nstmt=stmt@entry=0x1b35278, indexRelationId=16406, indexRelationId@entry=0,\n parentIndexId=parentIndexId@entry=0,\n parentConstraintId=parentConstraintId@entry=0, is_alter_table=\nis_alter_table@entry=false, check_rights=true, check_not_in_use=true,\n skip_build=false, quiet=false) at indexcmds.c:1329\n#3 0x000000000076bf80 in ProcessUtilitySlow (pstate=pstate@entry=0x1b350c8, \npstmt=pstmt@entry=0x1a2bf40,\n queryString=queryString@entry=0x1a2b2c8 \"create index CONCURRENTLY \nidxpart_a_idx on idxpart (a);\", context=context@entry=PROCESS_UTILITY_TOPLEVEL, \nparams=params@entry=0x0,\n queryEnv=queryEnv@entry=0x0, qc=0x7ffc86cc7630, dest=0x1a2c200) at \nutility.c:1474\n#4 0x000000000076afeb in standard_ProcessUtility (pstmt=0x1a2bf40, \nqueryString=0x1a2b2c8 \"create index CONCURRENTLY idxpart_a_idx on idxpart (a);\",\n context=PROCESS_UTILITY_TOPLEVEL, params=0x0,\n queryEnv=0x0, dest=0x1a2c200, qc=0x7ffc86cc7630) at utility.c:1069\n#5 0x0000000000768992 in PortalRunUtility (portal=0x1a8d1f8, pstmt=0x1a2bf40, \nisTopLevel=<optimized out>, setHoldSnapshot=<optimized out>, dest=<optimized out>,\n qc=0x7ffc86cc7630) at pquery.c:1157\n#6 0x00000000007693f3 in PortalRunMulti (portal=portal@entry=0x1a8d1f8, \nisTopLevel=isTopLevel@entry=true, setHoldSnapshot=setHoldSnapshot@entry=false, \ndest=dest@entry=0x1a2c200,\n altdest=altdest@entry=0x1a2c200, qc=qc@entry=0x7ffc86cc7630) at pquery.c:1310\n#7 0x0000000000769ed3 in PortalRun (portal=portal@entry=0x1a8d1f8, 
count=count\n@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true, run_once=run_once\n@entry=true, dest=dest@entry=0x1a2c200,\n altdest=altdest@entry=0x1a2c200, qc=0x7ffc86cc7630) at pquery.c:779\n#8 0x0000000000765b06 in exec_simple_query (query_string=0x1a2b2c8 \"create \nindex CONCURRENTLY idxpart_a_idx on idxpart (a);\") at postgres.c:1239\n#9 0x0000000000767de5 in PostgresMain (argc=<optimized out>, argv=argv@entry\n=0x1a552c8, dbname=<optimized out>, username=<optimized out>) at postgres.c:4315\n#10 0x00000000006f2b23 in BackendRun (port=0x1a4d1e0, port=0x1a4d1e0) at postmaster.c:4523\n#11 BackendStartup (port=0x1a4d1e0) at postmaster.c:4215\n#12 ServerLoop () at postmaster.c:1727\n#13 0x00000000006f3a1f in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x1a25ea0) \nat postmaster.c:1400\n#14 0x00000000004857f9 in main (argc=3, argv=0x1a25ea0) at main.c:210\n\n\nYou can reproduce it like this:\n```\ncreate table idxpart (a int, b int, c text) partition by range (a);\ncreate table idxpart1 partition of idxpart for values from (0) to (10);\ncreate table idxpart2 partition of idxpart for values from (10) to (20);\ncreate index CONCURRENTLY idxpart_a_idx on idxpart (a);\n```\nI have been trying to get familiar with the source code of create index.\nCan you solve this bug first? I will try my best to implement CIC with you.\nNext, I will read your patches v2-002 and v2-003.\n\nThank you very much,\n Regards, Adger",
"msg_date": "Fri, 12 Jun 2020 16:06:28 +0800",
"msg_from": "\"=?UTF-8?B?5p2O5p2wKOaFjui/vSk=?=\" <adger.lj@alibaba-inc.com>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?5Zue5aSN77yaaG93IHRvIGNyZWF0ZSBpbmRleCBjb25jdXJyZW50bHkgb24gcGFyaXRpb25l?=\n =?UTF-8?B?ZCB0YWJsZQ==?="
},
{
"msg_contents": "Hi Justin,\n\n> Maybe I'm wrong, but I don't think there's any known difficulty - just that> nobody did it yet. You should pay attention to what happens on error, but\n> hopefully you wouldn't need to add much code and can rely on existing code to\n> paths to handle that right.\n\nyes, I am trying to learn the code of index definition.\n\n> I think you'd look at the commits and code implementing indexes on partitioned\n> tables and CREATE INDEX CONCURRENTLY. And maybe any following commits with\n> fixes.\n\n> You'd first loop around all children (recursively if there are partitions which\n> are themselves partitioned) and create indexes concurrently. \n\nAs we all know, CIC has three transactions. If we recursively in n partitioned tables, \nit will become 3N transactions. If an error occurs in these transactions, we have too many things to deal...\n\nIf an error occurs when an index is created in one of the partitions, \nwhat should we do with our new index?\n\n\nThank you very much,\n Regards, Adger\n\n\n\n\n\n\n\nHi Justin,> Maybe I'm wrong, but I don't think there's any known difficulty - just that> nobody did it yet. You should pay attention to what happens on error, but> hopefully you wouldn't need to add much code and can rely on existing code to> paths to handle that right.yes, I am trying to learn the code of index definition.> I think you'd look at the commits and code implementing indexes on partitioned> tables and CREATE INDEX CONCURRENTLY. And maybe any following commits with> fixes.> You'd first loop around all children (recursively if there are partitions which> are themselves partitioned) and create indexes concurrently. As we all know, CIC has three transactions. If we recursively in n partitioned tables, it will become 3N transactions. If an error occurs in these transactions, we have too many things to deal... 
If an error occurs when an index is created in one of the partitions, what should we do with our new index?Thank you very much, Regards, Adger",
"msg_date": "Fri, 12 Jun 2020 16:17:34 +0800",
"msg_from": "\"=?UTF-8?B?5p2O5p2wKOaFjui/vSk=?=\" <adger.lj@alibaba-inc.com>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?5Zue5aSN77yaaG93IHRvIGNyZWF0ZSBpbmRleCBjb25jdXJyZW50bHkgb24gcGFyaXRpb25l?=\n =?UTF-8?B?ZCB0YWJsZQ==?="
},
{
"msg_contents": "On Fri, Jun 12, 2020 at 04:17:34PM +0800, 李杰(慎追) wrote:\n> As we all know, CIC has three transactions. If we recursively in n partitioned tables, \n> it will become 3N transactions. If an error occurs in these transactions, we have too many things to deal...\n> \n> If an error occurs when an index is created in one of the partitions, \n> what should we do with our new index?\n\nMy (tentative) understanding is that these types of things should use a\n\"subtransaction\" internally.. So if the toplevel transaction rolls back, its\nchanges are lost. In some cases, it might be desirable to not roll back, in\nwhich case the user(client) should first create indexes (concurrently if\nneeded) on every child, and then later create index on parent (that has the\nadvtantage of working on older servers, too).\n\npostgres=# SET client_min_messages=debug;\npostgres=# CREATE INDEX ON t(i);\nDEBUG: building index \"t1_i_idx\" on table \"t1\" with request for 1 parallel worker\nDEBUG: index \"t1_i_idx\" can safely use deduplication\nDEBUG: creating and filling new WAL file\nDEBUG: done creating and filling new WAL file\nDEBUG: creating and filling new WAL file\nDEBUG: done creating and filling new WAL file\nDEBUG: building index \"t2_i_idx\" on table \"t2\" with request for 1 parallel worker\n^C2020-06-12 13:08:17.001 CDT [19291] ERROR: canceling statement due to user request\n2020-06-12 13:08:17.001 CDT [19291] STATEMENT: CREATE INDEX ON t(i);\n2020-06-12 13:08:17.001 CDT [27410] FATAL: terminating connection due to administrator command\n2020-06-12 13:08:17.001 CDT [27410] STATEMENT: CREATE INDEX ON t(i);\nCancel request sent\n\nIf the index creation is interrupted at this point, no indexes will exist.\n\nOn Fri, Jun 12, 2020 at 04:06:28PM +0800, 李杰(慎追) wrote:\n> >On Sat, Jun 06, 2020 at 09:23:32AM -0500, Justin Pryzby wrote:\n> > I looked at CIC now and came up with the attached. 
All that's needed to allow\n> > this case is to close the relation before recursing to partitions - it needs to\n> > be closed before calling CommitTransactionCommand(). There's probably a better\n> > way to write this, but I can't see that there's anything complicated about\n> > handling partitioned tables.\n> \n> I'm so sorry about getting back late.\n> Thank you very much for helping me consider this issue.\n> I compiled the patch v1 you provided. And I patch v2-001 again to enter postgresql.\n> I got a coredump that was easy to reproduce. As follows:\n\n> I have been trying to get familiar with the source code of create index.\n> Can you solve this bug first? I will try my best to implement CIC with you.\n> Next, I will read your patchs v2-002 and v2-003.\n\nThanks, fixed.\n\nOn Fri, Jun 12, 2020 at 04:20:17PM +0900, Michael Paquier wrote:\n> When it comes to test behaviors specific to partitioning, there are in\n> my experience three things to be careful about and stress in the tests:\n> - Use at least two layers of partitioning.\n> - Include into the partition tree a partition that has no leaf\n> partitions.\n> - Test the commands on the top-most parent, a member in the middle of\n> the partition tree, the partition with no leaves, and one leaf, making\n> sure that relfilenode changes where it should and that partition trees\n> remain intact (you can use pg_partition_tree() for that.)\n\nAdded, thanks for looking.\n\n-- \nJustin",
"msg_date": "Fri, 12 Jun 2020 13:15:42 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: how to create index concurrently on partitioned table"
},
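[Editor's note: as background for the error-handling discussion above, a CREATE INDEX CONCURRENTLY that fails partway does not roll its catalog entries back; it leaves indexes marked INVALID. They can be found through the system catalogs and then dropped or rebuilt. The index names in the sketch are placeholders.]

```sql
-- List indexes left INVALID by a failed or cancelled CIC attempt.
SELECT indexrelid::regclass AS index_name,
       indrelid::regclass  AS table_name
FROM pg_index
WHERE NOT indisvalid;

-- Placeholder names: either drop the invalid index and retry CIC,
-- or rebuild it in place.
DROP INDEX some_invalid_idx;
REINDEX INDEX some_other_invalid_idx;
```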
{
"msg_contents": "> My (tentative) understanding is that these types of things should use a\n> \"subtransaction\" internally.. So if the toplevel transaction rolls back, its\n> changes are lost. In some cases, it might be desirable to not roll back, in\n> which case the user(client) should first create indexes (concurrently if\n> needed) on every child, and then later create index on parent (that has the\n> advtantage of working on older servers, too).\n\nHi Justin, \nI have a case here, you see if it meets your expectations.\n\n`````\npostgres=# CREATE TABLE prt1 (a int, b int, c varchar) PARTITION BY RANGE(a);\nCREATE TABLE\npostgres=# CREATE TABLE prt1_p1 PARTITION OF prt1 FOR VALUES FROM (0) TO (25);\nCREATE TABLE\npostgres=# CREATE TABLE prt1_p2 PARTITION OF prt1 FOR VALUES FROM (25) TO (50);\nCREATE TABLE\npostgres=# CREATE TABLE prt1_p3 PARTITION OF prt1 FOR VALUES FROM (50) TO (75);\nCREATE TABLE\npostgres=# INSERT INTO prt1 SELECT i, i % 25, to_char(i, 'FM0000') FROM generate_series(0, 74) i;\nINSERT 0 75\npostgres=# insert into prt1 values (26,1,'FM0026');\nINSERT 0 1\npostgres=# create unique index CONCURRENTLY idexpart_cic on prt1 (a);\nERROR: could not create unique index \"prt1_p2_a_idx\"\nDETAIL: Key (a)=(26) is duplicated.\npostgres=# \\d+ prt1\n Partitioned table \"public.prt1\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target | Description\n--------+-------------------+-----------+----------+---------+----------+--------------+-------------\n a | integer | | | | plain | |\n b | integer | | | | plain | |\n c | character varying | | | | extended | |\nPartition key: RANGE (a)\nIndexes:\n \"idexpart_cic\" UNIQUE, btree (a) INVALID\nPartitions: prt1_p1 FOR VALUES FROM (0) TO (25),\n prt1_p2 FOR VALUES FROM (25) TO (50),\n prt1_p3 FOR VALUES FROM (50) TO (75)\n\npostgres=# \\d+ prt1_p1\n Table \"public.prt1_p1\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target | 
Description\n--------+-------------------+-----------+----------+---------+----------+--------------+-------------\n a | integer | | | | plain | |\n b | integer | | | | plain | |\n c | character varying | | | | extended | |\nPartition of: prt1 FOR VALUES FROM (0) TO (25)\nPartition constraint: ((a IS NOT NULL) AND (a >= 0) AND (a < 25))\nIndexes:\n \"prt1_p1_a_idx\" UNIQUE, btree (a)\nAccess method: heap\n\npostgres=# \\d+ prt1_p2\n Table \"public.prt1_p2\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target | Description\n--------+-------------------+-----------+----------+---------+----------+--------------+-------------\n a | integer | | | | plain | |\n b | integer | | | | plain | |\n c | character varying | | | | extended | |\nPartition of: prt1 FOR VALUES FROM (25) TO (50)\nPartition constraint: ((a IS NOT NULL) AND (a >= 25) AND (a < 50))\nIndexes:\n \"prt1_p2_a_idx\" UNIQUE, btree (a) INVALID\nAccess method: heap\n\npostgres=# \\d+ prt1_p3\n Table \"public.prt1_p3\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target | Description\n--------+-------------------+-----------+----------+---------+----------+--------------+-------------\n a | integer | | | | plain | |\n b | integer | | | | plain | |\n c | character varying | | | | extended | |\nPartition of: prt1 FOR VALUES FROM (50) TO (75)\nPartition constraint: ((a IS NOT NULL) AND (a >= 50) AND (a < 75))\nAccess method: heap\n```````\nAs shown above, an error occurred while creating an index in the second partition. 
\nIt is clear that the index on the partitioned table is invalid: \nthe index on the first partition is normal, the one on the second partition is invalid, \nand the index on the third partition does not exist at all.\n\nBut if we run the following test again:\n```\npostgres=# truncate table prt1;\nTRUNCATE TABLE\npostgres=# INSERT INTO prt1 SELECT i, i % 25, to_char(i, 'FM0000') FROM generate_series(0, 74) i;\nINSERT 0 75\npostgres=# insert into prt1 values (51,1,'FM0051');\nINSERT 0 1\npostgres=# drop index idexpart_cic;\nDROP INDEX\npostgres=# create unique index CONCURRENTLY idexpart_cic on prt1 (a);\nERROR: could not create unique index \"prt1_p3_a_idx\"\nDETAIL: Key (a)=(51) is duplicated.\npostgres=# \\d+ prt1\n Partitioned table \"public.prt1\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target | Description\n--------+-------------------+-----------+----------+---------+----------+--------------+-------------\n a | integer | | | | plain | |\n b | integer | | | | plain | |\n c | character varying | | | | extended | |\nPartition key: RANGE (a)\nIndexes:\n \"idexpart_cic\" UNIQUE, btree (a) INVALID\nPartitions: prt1_p1 FOR VALUES FROM (0) TO (25),\n prt1_p2 FOR VALUES FROM (25) TO (50),\n prt1_p3 FOR VALUES FROM (50) TO (75)\n\npostgres=# \\d+ prt1_p1\n Table \"public.prt1_p1\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target | Description\n--------+-------------------+-----------+----------+---------+----------+--------------+-------------\n a | integer | | | | plain | |\n b | integer | | | | plain | |\n c | character varying | | | | extended | |\nPartition of: prt1 FOR VALUES FROM (0) TO (25)\nPartition constraint: ((a IS NOT NULL) AND (a >= 0) AND (a < 25))\nIndexes:\n \"prt1_p1_a_idx\" UNIQUE, btree (a)\nAccess method: heap\n\npostgres=# \\d+ prt1_p2\n Table \"public.prt1_p2\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target | 
Description\n--------+-------------------+-----------+----------+---------+----------+--------------+-------------\n a | integer | | | | plain | |\n b | integer | | | | plain | |\n c | character varying | | | | extended | |\nPartition of: prt1 FOR VALUES FROM (25) TO (50)\nPartition constraint: ((a IS NOT NULL) AND (a >= 25) AND (a < 50))\nIndexes:\n \"prt1_p2_a_idx\" UNIQUE, btree (a)\nAccess method: heap\n\npostgres=# \\d+ prt1_p3\n Table \"public.prt1_p3\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target | Description\n--------+-------------------+-----------+----------+---------+----------+--------------+-------------\n a | integer | | | | plain | |\n b | integer | | | | plain | |\n c | character varying | | | | extended | |\nPartition of: prt1 FOR VALUES FROM (50) TO (75)\nPartition constraint: ((a IS NOT NULL) AND (a >= 50) AND (a < 75))\nAccess method: heap\n```\n\nNow we can see that the first two partitions have indexes, \nbut the third partition has no index because of the error. \nSo the outcome of our first case, where the third partition ends up with no index at all, is probably not what we expect.\nThat is to say, when CIC fails, it should either roll everything back or carry on to the end, rather than stop partway through.\nThis is my shallow opinion, please take it as a reference.\n\nThank you very much,\n Regards, Adger\n\n\n\n------------------------------------------------------------------\nFrom: Justin Pryzby <pryzby@telsasoft.com>\nSent: Saturday, June 13, 2020 02:15\nTo: Michael Paquier <michael@paquier.xyz>; 李杰(慎追) <adger.lj@alibaba-inc.com>\nCc: pgsql-hackers <pgsql-hackers@lists.postgresql.org>; 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>; Alvaro Herrera <alvherre@2ndquadrant.com>\nSubject: Re: how to create index concurrently on partitioned table\n\nOn Fri, Jun 12, 2020 at 04:17:34PM +0800, 李杰(慎追) wrote:\n> As we all know, CIC has three transactions. If we recursively in n partitioned tables, \n> it will become 3N transactions. 
If an error occurs in these transactions, we have too many things to deal...\n> \n> If an error occurs when an index is created in one of the partitions, \n> what should we do with our new index?\n\nMy (tentative) understanding is that these types of things should use a\n\"subtransaction\" internally.. So if the toplevel transaction rolls back, its\nchanges are lost. In some cases, it might be desirable to not roll back, in\nwhich case the user(client) should first create indexes (concurrently if\nneeded) on every child, and then later create index on parent (that has the\nadvtantage of working on older servers, too).\n\npostgres=# SET client_min_messages=debug;\npostgres=# CREATE INDEX ON t(i);\nDEBUG: building index \"t1_i_idx\" on table \"t1\" with request for 1 parallel worker\nDEBUG: index \"t1_i_idx\" can safely use deduplication\nDEBUG: creating and filling new WAL file\nDEBUG: done creating and filling new WAL file\nDEBUG: creating and filling new WAL file\nDEBUG: done creating and filling new WAL file\nDEBUG: building index \"t2_i_idx\" on table \"t2\" with request for 1 parallel worker\n^C2020-06-12 13:08:17.001 CDT [19291] ERROR: canceling statement due to user request\n2020-06-12 13:08:17.001 CDT [19291] STATEMENT: CREATE INDEX ON t(i);\n2020-06-12 13:08:17.001 CDT [27410] FATAL: terminating connection due to administrator command\n2020-06-12 13:08:17.001 CDT [27410] STATEMENT: CREATE INDEX ON t(i);\nCancel request sent\n\nIf the index creation is interrupted at this point, no indexes will exist.\n\nOn Fri, Jun 12, 2020 at 04:06:28PM +0800, 李杰(慎追) wrote:\n> >On Sat, Jun 06, 2020 at 09:23:32AM -0500, Justin Pryzby wrote:\n> > I looked at CIC now and came up with the attached. All that's needed to allow\n> > this case is to close the relation before recursing to partitions - it needs to\n> > be closed before calling CommitTransactionCommand(). 
There's probably a better\n> > way to write this, but I can't see that there's anything complicated about\n> > handling partitioned tables.\n> \n> I'm so sorry about getting back late.\n> Thank you very much for helping me consider this issue.\n> I compiled the patch v1 you provided. And I patch v2-001 again to enter postgresql.\n> I got a coredump that was easy to reproduce. As follows:\n\n> I have been trying to get familiar with the source code of create index.\n> Can you solve this bug first? I will try my best to implement CIC with you.\n> Next, I will read your patchs v2-002 and v2-003.\n\nThanks, fixed.\n\nOn Fri, Jun 12, 2020 at 04:20:17PM +0900, Michael Paquier wrote:\n> When it comes to test behaviors specific to partitioning, there are in\n> my experience three things to be careful about and stress in the tests:\n> - Use at least two layers of partitioning.\n> - Include into the partition tree a partition that has no leaf\n> partitions.\n> - Test the commands on the top-most parent, a member in the middle of\n> the partition tree, the partition with no leaves, and one leaf, making\n> sure that relfilenode changes where it should and that partition trees\n> remain intact (you can use pg_partition_tree() for that.)\n\nAdded, thanks for looking.\n\n-- \nJustin
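
The manual route described above (create indexes concurrently on every child, then create the index on the parent) can be sketched as follows for the prt1 example; the index names are illustrative, and this works on stock PostgreSQL 11+ without the patch:

```
-- Build each partition's index concurrently, then create the parent index
-- on the partitioned table alone (ON ONLY) and attach the children.
-- The parent index becomes valid once every partition index is attached.
CREATE UNIQUE INDEX CONCURRENTLY prt1_p1_a_uidx ON prt1_p1 (a);
CREATE UNIQUE INDEX CONCURRENTLY prt1_p2_a_uidx ON prt1_p2 (a);
CREATE UNIQUE INDEX CONCURRENTLY prt1_p3_a_uidx ON prt1_p3 (a);
CREATE UNIQUE INDEX idexpart_cic ON ONLY prt1 (a);
ALTER INDEX idexpart_cic ATTACH PARTITION prt1_p1_a_uidx;
ALTER INDEX idexpart_cic ATTACH PARTITION prt1_p2_a_uidx;
ALTER INDEX idexpart_cic ATTACH PARTITION prt1_p3_a_uidx;
```

The ON ONLY step is quick because the partitioned table itself has no storage, so only the per-partition builds take real time.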
",
"msg_date": "Mon, 15 Jun 2020 20:15:05 +0800",
"msg_from": "\"=?UTF-8?B?5p2O5p2wKOaFjui/vSk=?=\" <adger.lj@alibaba-inc.com>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?5Zue5aSN77yaaG93IHRvIGNyZWF0ZSBpbmRleCBjb25jdXJyZW50bHkgb24gcGFydGl0aW9u?=\n =?UTF-8?B?ZWQgdGFibGU=?="
},
{
"msg_contents": "On Mon, Jun 15, 2020 at 08:15:05PM +0800, 李杰(慎追) wrote:\n> As shown above, an error occurred while creating an index in the second partition. \n> It can be clearly seen that the index of the partitioned table is invalid \n> and the index of the first partition is normal, the second partition is invalid, \n> and the Third Partition index does not exist at all.\n\nThat's a problem. I really think that we should make the steps of the\nconcurrent operation consistent across all relations, meaning that all\nthe indexes should be created as invalid for all the parts of the\npartition tree, including partitioned tables as well as their\npartitions, in the same transaction. Then a second new transaction\ngets used for the index build, followed by a third one for the\nvalidation that switches the indexes to become valid.\n--\nMichael",
"msg_date": "Mon, 15 Jun 2020 21:37:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: =?utf-8?B?5Zue5aSN77yaaG8=?= =?utf-8?Q?w?= to create index\n concurrently on partitioned table"
},
{
"msg_contents": "> That's a problem. I really think that we should make the steps of the\n> concurrent operation consistent across all relations, meaning that all\n> the indexes should be created as invalid for all the parts of the\n> partition tree, including partitioned tables as well as their\n> partitions, in the same transaction. Then a second new transaction\n> gets used for the index build, followed by a third one for the\n> validation that switches the indexes to become valid.\n\nThis is a good idea.\nWe should maintain the consistency of the entire partition table.\nHowever, this is not a small change in the code.\nMaybe we need to make a new design for DefineIndex function....\n\nBut most importantly, if we use three steps to implement CIC, \nwe will spend more time than on ordinary tables, especially in high concurrency cases.\nWhile waiting for one partition that users use frequently, \nthe creation of the other partition indexes is delayed.\nIs it worth doing this?\n\n Regards, Adger\n\n\n\n\n\n------------------------------------------------------------------\nFrom: Michael Paquier <michael@paquier.xyz>\nDate: Monday, June 15, 2020 20:37\nTo: 李杰(慎追) <adger.lj@alibaba-inc.com>\nCc: Justin Pryzby <pryzby@telsasoft.com>; pgsql-hackers <pgsql-hackers@lists.postgresql.org>; 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>; Alvaro Herrera <alvherre@2ndquadrant.com>\nSubject: Re: Re: how to create index concurrently on partitioned table\n\nOn Mon, Jun 15, 2020 at 08:15:05PM +0800, 李杰(慎追) wrote:\n> As shown above, an error occurred while creating an index in the second partition. \n> It can be clearly seen that the index of the partitioned table is invalid \n> and the index of the first partition is normal, the second partition is invalid, \n> and the Third Partition index does not exist at all.\n\nThat's a problem. 
I really think that we should make the steps of the\nconcurrent operation consistent across all relations, meaning that all\nthe indexes should be created as invalid for all the parts of the\npartition tree, including partitioned tables as well as their\npartitions, in the same transaction. Then a second new transaction\ngets used for the index build, followed by a third one for the\nvalidation that switches the indexes to become valid.\n--\nMichael
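
A side note on the INVALID leftovers shown in these sessions: per the CREATE INDEX documentation, an index left invalid by a failed concurrent build is ignored for queries but still maintained by writes, so the usual recovery is to remove the offending duplicate, drop the invalid index, and retry. A sketch against the earlier prt1 test (the retry assumes the patch under discussion, since unpatched servers reject CIC on a partitioned table):

```
-- Remove the extra duplicate row from the first test, drop the invalid
-- partitioned index (its attached partition indexes go with it), retry.
DELETE FROM prt1 WHERE a = 26 AND c = 'FM0026';
DROP INDEX idexpart_cic;
CREATE UNIQUE INDEX CONCURRENTLY idexpart_cic ON prt1 (a);
```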
",
"msg_date": "Mon, 15 Jun 2020 21:33:05 +0800",
"msg_from": "\"=?UTF-8?B?5p2O5p2wKOaFjui/vSk=?=\" <adger.lj@alibaba-inc.com>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?5Zue5aSN77ya5Zue5aSN77yaaG93IHRvIGNyZWF0ZSBpbmRleCBjb25jdXJyZW50bHkgb24g?=\n =?UTF-8?B?cGFydGl0aW9uZWQgdGFibGU=?="
},
{
"msg_contents": "On Mon, Jun 15, 2020 at 09:33:05PM +0800, 李杰(慎追) wrote:\n> This is a good idea.\n> We should maintain the consistency of the entire partition table.\n> However, this is not a small change in the code.\n> May be we need to make a new design for DefineIndex function....\n\nIndeed. I have looked at the patch set a bit and here is the related\nbit for CIC in 0001, meaning that you handle the basic index creation\nfor a partition tree within one transaction for each:\n+ /*\n+ * CIC needs to mark a partitioned table as VALID, which itself\n+ * requires setting READY, which is unset for CIC (even though\n+ * it's meaningless for an index without storage).\n+ */\n+ if (concurrent)\n+ {\n+ PopActiveSnapshot();\n+ CommitTransactionCommand();\n+ StartTransactionCommand();\n+ index_set_state_flags(indexRelationId, INDEX_CREATE_SET_READY);\n+\n+ CommandCounterIncrement();\n+ index_set_state_flags(indexRelationId, INDEX_CREATE_SET_VALID);\n }\n\nI am afraid that this makes the error handling more complicated, with\nrisks of having inconsistent partition trees. That's the point you\nraised. This one is going to need more thoughts.\n\n> But most importantly, if we use three steps to implement CIC, \n> we will spend more time than ordinary tables, especially in high\n> concurrency cases. To wait for one of partitions which the users to\n> use frequently, the creation of other partition indexes is delayed. \n> Is it worth doing this?\n\nCIC is an operation that exists while allowing read and writes to\nstill happen in parallel, so that's fine IMO if it takes time. Now it\nis true that it may not scale well so we should be careful with the\napproach taken. Also, I think that the case of REINDEX would require\nless work overall because we already have some code in place to gather\nmultiple indexes from one or more relations and work on these\nconsistently, all at once.\n--\nMichael",
"msg_date": "Tue, 16 Jun 2020 10:02:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: =?utf-8?B?5Zue5aSN77ya5Zue5aSN77yaaG8=?= =?utf-8?Q?w?= to create\n index concurrently on partitioned table"
},
{
"msg_contents": "> I am afraid that this makes the error handling more complicated, with\n> risks of having inconsistent partition trees. That's the point you\n> raised. This one is going to need more thoughts.\n> CIC is an operation that exists while allowing read and writes to\n> still happen in parallel, so that's fine IMO if it takes time. Now it\n> is true that it may not scale well so we should be careful with the\n> approach taken. Also, I think that the case of REINDEX would require\n> less work overall because we already have some code in place to gather\n> multiple indexes from one or more relations and work on these\n> consistently, all at once.\n\n\nI'm with you on that. \n(Scheme 1)\nWe can refer to the implementation in ReindexRelationConcurrently:\n in the three phases of REINDEX CONCURRENTLY,\nall indexes of the partitions are operated on one by one in each phase.\nIn this way, we can maintain the consistency of the entire partitioned table index.\nAfter we implement CIC in this way, we can also complete reindexing a partitioned table index concurrently (this is not done now.)\n\nLooking back, let's talk about our original scheme.\n(Scheme 2)\nIf CIC is performed one by one on each partition, \nhow can we let subsequent partitions continue when an error occurs in the second partition?\n If this problem can be solved, it's not that I can't accept it.\nBecause a partition CIC error is acceptable, you can reindex it later.\nPseudo indexes on partitioned tables are useless for real queries, \n but the indexes on partitions are really useful.\n\nScheme 1 is more elegant and can also implement REINDEX CONCURRENTLY on a partitioned table, \nbut the implementation is more complex.\nScheme 2: simple implementation, but not so friendly.\n\nHi Justin,\nWhich scheme do you think is more helpful to realize CIC?\n\n\n\n\n\n\n------------------------------------------------------------------\nFrom: Michael Paquier <michael@paquier.xyz>\nDate: Tuesday, June 16, 2020 
09:02\nTo: 李杰(慎追) <adger.lj@alibaba-inc.com>\nCc: Justin Pryzby <pryzby@telsasoft.com>; pgsql-hackers <pgsql-hackers@lists.postgresql.org>; 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>; Alvaro Herrera <alvherre@2ndquadrant.com>\nSubject: Re: Re: Re: how to create index concurrently on partitioned table\n\nOn Mon, Jun 15, 2020 at 09:33:05PM +0800, 李杰(慎追) wrote:\n> This is a good idea.\n> We should maintain the consistency of the entire partition table.\n> However, this is not a small change in the code.\n> May be we need to make a new design for DefineIndex function....\n\nIndeed. I have looked at the patch set a bit and here is the related\nbit for CIC in 0001, meaning that you handle the basic index creation\nfor a partition tree within one transaction for each:\n+ /*\n+ * CIC needs to mark a partitioned table as VALID, which itself\n+ * requires setting READY, which is unset for CIC (even though\n+ * it's meaningless for an index without storage).\n+ */\n+ if (concurrent)\n+ {\n+ PopActiveSnapshot();\n+ CommitTransactionCommand();\n+ StartTransactionCommand();\n+ index_set_state_flags(indexRelationId, INDEX_CREATE_SET_READY);\n+\n+ CommandCounterIncrement();\n+ index_set_state_flags(indexRelationId, INDEX_CREATE_SET_VALID);\n }\n\nI am afraid that this makes the error handling more complicated, with\nrisks of having inconsistent partition trees. That's the point you\nraised. This one is going to need more thoughts.\n\n> But most importantly, if we use three steps to implement CIC, \n> we will spend more time than ordinary tables, especially in high\n> concurrency cases. To wait for one of partitions which the users to\n> use frequently, the creation of other partition indexes is delayed. \n> Is it worth doing this?\n\nCIC is an operation that exists while allowing read and writes to\nstill happen in parallel, so that's fine IMO if it takes time. Now it\nis true that it may not scale well so we should be careful with the\napproach taken. 
Also, I think that the case of REINDEX would require\nless work overall because we already have some code in place to gather\nmultiple indexes from one or more relations and work on these\nconsistently, all at once.\n--\nMichael
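
Under Scheme 2, the half-built state is at least discoverable and repairable per partition. A hypothetical check against the earlier prt1 example (pg_partition_tree() and REINDEX CONCURRENTLY both require v12 or later):

```
-- List any invalid indexes anywhere in the partition tree.
SELECT i.indexrelid::regclass AS invalid_index
FROM pg_partition_tree('prt1') AS t
JOIN pg_index AS i ON i.indrelid = t.relid
WHERE NOT i.indisvalid;

-- Then rebuild a broken partition index in place.
REINDEX INDEX CONCURRENTLY prt1_p2_a_idx;
```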
",
"msg_date": "Tue, 16 Jun 2020 14:49:08 +0800",
"msg_from": "\"=?UTF-8?B?5p2O5p2wKOaFjui/vSk=?=\" <adger.lj@alibaba-inc.com>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?5Zue5aSN77ya5Zue5aSN77ya5Zue5aSN77yaaG93IHRvIGNyZWF0ZSBpbmRleCBjb25jdXJy?=\n =?UTF-8?B?ZW50bHkgb24gcGFydGl0aW9uZWQgdGFibGU=?="
},
{
"msg_contents": "> We can refer to the implementation in the ReindexRelationConcurrently,\n> in the three phases of the REINDEX CONCURRENTLY,\n> all indexes of the partitions are operated one by one in each phase.\n> In this way, we can maintain the consistency of the entire partitioned table index.\n> After we implement CIC in this way, we can also complete reindex partitioned table index concurrently (this is not done now.)\n\n After careful analysis, I found that there were two choices that left me in a dilemma.\n We can handle the entire partition tree with one transaction in each of the three phases of CIC, just like an ordinary table.\n\nHowever, I found a problem. If there are many partitions, \nwe may need to handle too many missing index entries when validate_index() runs.\nEspecially for the first partition, a long time may have passed and many entries will be missing.\nIn this case, why don't we put the second and third phase together into a transaction for each partition?\n\nSo, which scheme do you think is better?\nChoose to maintain consistency in all three phases, \nor just maintain consistency in the first phase?\n\n\nThank you very much,\n Regards, Adger
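
For reference, the phase boundaries being weighed here are observable from another session in pg_index; with the earlier example's names, something like:

```
-- Phase 1 (index_create) leaves indisready = f, indisvalid = f;
-- phase 2 (index_concurrently_build) sets indisready = t, so writes
-- maintain the index; phase 3 (validate_index) finally sets indisvalid = t.
SELECT c.relname, i.indisready, i.indisvalid
FROM pg_index AS i
JOIN pg_class AS c ON c.oid = i.indexrelid
WHERE c.relname LIKE 'prt1%' OR c.relname = 'idexpart_cic';
```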
",
"msg_date": "Wed, 17 Jun 2020 22:22:28 +0800",
"msg_from": "\"=?UTF-8?B?5p2O5p2wKOaFjui/vSk=?=\" <adger.lj@alibaba-inc.com>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?5Zue5aSN77ya5Zue5aSN77ya5Zue5aSN77yaaG93IHRvIGNyZWF0ZSBpbmRleCBjb25jdXJy?=\n =?UTF-8?B?ZW50bHkgb24gcGFydGl0aW9uZWQgdGFibGU=?="
},
{
"msg_contents": "On Wed, Jun 17, 2020 at 10:22:28PM +0800, 李杰(慎追) wrote:\n> However, I found a problem. If there are many partitions, \n> we may need to handle too many missing index entries when\n> validate_index(). Especially for the first partition, the time may\n> have been long and many entries are missing. In this case, why\n> don't we put the second and third phase together into a transaction\n> for each partition? \n\nNot sure I am following. In the case of REINDEX, it seems to me that\nthe calls to validate_index() and index_concurrently_build() can\nhappen in a separate transaction for each index, as long as all the\ncalls to index_concurrently_swap() are grouped together in the same\ntransaction to make sure that index partition trees are switched\nconsistently when all entries are swapped from an invalid state to a\nvalid state, because the swapping phase is also when we attach a fresh\nindex to a partition tree. See also index_concurrently_create_copy()\nwhere we don't set parentIndexRelid for the lower call to\nindex_create(). It would be good of course to check that when\nswapping we have the code to handle that for a lot of indexes at\nonce.\n--\nMichael",
"msg_date": "Thu, 18 Jun 2020 11:41:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: =?utf-8?B?5Zue5aSN77ya5Zue5aSN77ya5Zue?=\n =?utf-8?B?5aSN77yaaG93?= to create index concurrently on partitioned table"
},
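The per-partition, per-phase workflow debated in this thread mirrors the manual workaround that already works on released PostgreSQL versions: create an (initially invalid) index on the parent with ON ONLY, build each partition's index with CREATE INDEX CONCURRENTLY, then attach them. A minimal sketch — table, column, and index names are illustrative, not from the thread:

```sql
-- A partitioned table with two partitions (illustrative schema).
CREATE TABLE measurement (city_id int, logdate date) PARTITION BY RANGE (logdate);
CREATE TABLE measurement_y2020 PARTITION OF measurement
    FOR VALUES FROM ('2020-01-01') TO ('2021-01-01');
CREATE TABLE measurement_y2021 PARTITION OF measurement
    FOR VALUES FROM ('2021-01-01') TO ('2022-01-01');

-- Step 1: create the index on the parent only; ON ONLY skips the
-- partitions, so the parent index starts out invalid.
CREATE INDEX measurement_logdate_idx ON ONLY measurement (logdate);

-- Step 2: build each partition's index concurrently, one at a time,
-- outside of any transaction block.
CREATE INDEX CONCURRENTLY measurement_y2020_logdate_idx
    ON measurement_y2020 (logdate);
CREATE INDEX CONCURRENTLY measurement_y2021_logdate_idx
    ON measurement_y2021 (logdate);

-- Step 3: attach the partition indexes; once every partition has an
-- attached index, the parent index becomes valid automatically.
ALTER INDEX measurement_logdate_idx ATTACH PARTITION measurement_y2020_logdate_idx;
ALTER INDEX measurement_logdate_idx ATTACH PARTITION measurement_y2021_logdate_idx;
```

The final attach step is the manual analogue of the swap/attach consistency point Michael describes: a failure before the last ATTACH PARTITION leaves the parent index invalid, while any partition indexes already built remain usable.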
{
"msg_contents": "> Not sure I am following. In the case of REINDEX, it seems to me that\n> the calls to validate_index() and index_concurrently_build() can\n> happen in a separate transaction for each index, as long as all the\n> calls to index_concurrently_swap() are grouped together in the same\n> transaction to make sure that index partition trees are switched\n> consistently when all entries are swapped from an invalid state to a\n> valid state, because the swapping phase is also when we attach a fresh\n> index to a partition tree. See also index_concurrently_create_copy()\n> where we don't set parentIndexRelid for the lower call to\n> index_create(). It would be good of course to check that when\n> swapping we have the code to handle that for a lot of indexes at\n> once.\n\nLet's look at this example: \nA partition table has five partitions,\n parttable: part1, part2, part3, part3 ,part5\nWe simply abstract the following definitions:\n phase 1: index_create(), it is only registered in catalogs.\n phase 2: index_concurrently_build(), Build the indexes.\n phase 3: validate_index(), insert any missing index entries, mark the index as valid.\n\n(schema 1)\n```\nStartTransaction one\nparttable phase 1\npart 1 phase 1\npart 2 phase 1\npart 3 phase 1\npart 4 phase 1\npart 5 phase 1\nCommitTransaction\n\nStartTransaction two\nparttable phase 2part 1 phase 2\npart 2 phase 2\npart 3 phase 2 (error occurred )\npart 4 phase 2\npart 5 phase 2\nCommitTransaction\n\nStartTransaction three\nparttable phase 3\npart 1 phase 3\npart 2 phase 3 \npart 3 phase 3 \npart 4 phase 4 \npart 5 phase 5 CommitTransaction\n...\n```\nNow, the following steps cannot continue due to an error in Transaction two .\nso, Transaction two roll back, Transaction three haven't started.\nAll of our indexes are invalid. 
In this way, \nwe ensure the strong consistency of indexes in the partition tree.\nHowever, we need to rebuild all indexes when reindex.\n\n(schema 2)\n```\nStartTransaction one\nparttable phase 1\npart 1 phase 1\npart 2 phase 1\npart 3 phase 1\npart 4 phase 1\npart 5 phase 1\nCommitTransaction\n\nStartTransaction two part 1 phase 2\npart 1 phase 3\nCommitTransaction\n\nStartTransaction three part 2 phase 2\npart 2 phase 3\nCommitTransaction\n\nStartTransaction fourpart 3 phase 2 (error occurred )\npart 3 phase 3\nCommitTransaction\n\nStartTransaction five part 4 phase 2\npart 4 phase 3\n\n\nStartTransaction sixpart 5 phase 2\npart 5 phase 3\nCommitTransaction\n\n\nStartTransaction sevenparttable phase 2\nparttable phase 3\nCommitTransaction\n\n```\n\nNow, the following steps cannot continue due to an error in Transaction four .\nso, Transaction four roll back, Transactions behind Transaction 3 have not started\nThe indexes of the p1 and p2 partitions are available. Other indexes are invalid.\nIn reindex, we can ignore the rebuild of p1 and p2.\nThis seems better, although it seems to be inconsistent.\n\nDo you think that scheme is more suitable for CIC?\n\n\nThank you very much,\n Regards, Adger\n\n\n\n\n\n\n\n\n------------------------------------------------------------------\n发件人:Michael Paquier <michael@paquier.xyz>\n发送时间:2020年6月18日(星期四) 10:41\n收件人:李杰(慎追) <adger.lj@alibaba-inc.com>\n抄 送:Justin Pryzby <pryzby@telsasoft.com>; pgsql-hackers <pgsql-hackers@lists.postgresql.org>; 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>; Alvaro Herrera <alvherre@2ndquadrant.com>\n主 题:Re: 回复:回复:回复:how to create index concurrently on partitioned table\n\nOn Wed, Jun 17, 2020 at 10:22:28PM +0800, 李杰(慎追) wrote:\n> However, I found a problem. If there are many partitions, \n> we may need to handle too many missing index entries when\n> validate_index(). Especially for the first partition, the time may\n> have been long and many entries are missing. 
In this case, why\n> don't we put the second and third phase together into a transaction\n> for each partition? \n\nNot sure I am following. In the case of REINDEX, it seems to me that\nthe calls to validate_index() and index_concurrently_build() can\nhappen in a separate transaction for each index, as long as all the\ncalls to index_concurrently_swap() are grouped together in the same\ntransaction to make sure that index partition trees are switched\nconsistently when all entries are swapped from an invalid state to a\nvalid state, because the swapping phase is also when we attach a fresh\nindex to a partition tree. See also index_concurrently_create_copy()\nwhere we don't set parentIndexRelid for the lower call to\nindex_create(). It would be good of course to check that when\nswapping we have the code to handle that for a lot of indexes at\nonce.\n--\nMichael\n\n> Not sure I am following. In the case of REINDEX, it seems to me that> the calls to validate_index() and index_concurrently_build() can> happen in a separate transaction for each index, as long as all the> calls to index_concurrently_swap() are grouped together in the same> transaction to make sure that index partition trees are switched> consistently when all entries are swapped from an invalid state to a> valid state, because the swapping phase is also when we attach a fresh> index to a partition tree. See also index_concurrently_create_copy()> where we don't set parentIndexRelid for the lower call to> index_create(). It would be good of course to check that when> swapping we have the code to handle that for a lot of indexes at> once.Let's look at this example: A partition table has five partitions, parttable: part1, part2, part3, part3 ,part5We simply abstract the following definitions: phase 1: index_create(), it is only registered in catalogs. phase 2: index_concurrently_build(), Build the indexes. 
phase 3: validate_index(), insert any missing index entries, mark the index as valid.(schema 1)```StartTransaction oneparttable phase 1part 1 phase 1part 2 phase 1part 3 phase 1part 4 phase 1part 5 phase 1CommitTransactionStartTransaction twoparttable phase 2part 1 phase 2 part 2 phase 2 part 3 phase 2 (error occurred )part 4 phase 2 part 5 phase 2 CommitTransactionStartTransaction threeparttable phase 3part 1 phase 3part 2 phase 3 part 3 phase 3 part 4 phase 4 part 5 phase 5 CommitTransaction...```Now, the following steps cannot continue due to an error in Transaction two .so, Transaction two roll back, Transaction three haven't started.All of our indexes are invalid. In this way, we ensure the strong consistency of indexes in the partition tree.However, we need to rebuild all indexes when reindex.(schema 2)```StartTransaction oneparttable phase 1part 1 phase 1part 2 phase 1part 3 phase 1part 4 phase 1part 5 phase 1CommitTransactionStartTransaction two part 1 phase 2 part 1 phase 3CommitTransactionStartTransaction three part 2 phase 2 part 2 phase 3CommitTransactionStartTransaction fourpart 3 phase 2 (error occurred )part 3 phase 3CommitTransactionStartTransaction five part 4 phase 2 part 4 phase 3StartTransaction sixpart 5 phase 2 part 5 phase 3CommitTransactionStartTransaction sevenparttable phase 2 parttable phase 3CommitTransaction```Now, the following steps cannot continue due to an error in Transaction four .so, Transaction four roll back, Transactions behind Transaction 3 have not startedThe indexes of the p1 and p2 partitions are available. 
Other indexes are invalid.In reindex, we can ignore the rebuild of p1 and p2.This seems better, although it seems to be inconsistent.Do you think that scheme is more suitable for CIC?Thank you very much, Regards, Adger------------------------------------------------------------------发件人:Michael Paquier <michael@paquier.xyz>发送时间:2020年6月18日(星期四) 10:41收件人:李杰(慎追) <adger.lj@alibaba-inc.com>抄 送:Justin Pryzby <pryzby@telsasoft.com>; pgsql-hackers <pgsql-hackers@lists.postgresql.org>; 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>; Alvaro Herrera <alvherre@2ndquadrant.com>主 题:Re: 回复:回复:回复:how to create index concurrently on partitioned tableOn Wed, Jun 17, 2020 at 10:22:28PM +0800, 李杰(慎追) wrote:> However, I found a problem. If there are many partitions, > we may need to handle too many missing index entries when> validate_index(). Especially for the first partition, the time may> have been long and many entries are missing. In this case, why> don't we put the second and third phase together into a transaction> for each partition? Not sure I am following. In the case of REINDEX, it seems to me thatthe calls to validate_index() and index_concurrently_build() canhappen in a separate transaction for each index, as long as all thecalls to index_concurrently_swap() are grouped together in the sametransaction to make sure that index partition trees are switchedconsistently when all entries are swapped from an invalid state to avalid state, because the swapping phase is also when we attach a freshindex to a partition tree. See also index_concurrently_create_copy()where we don't set parentIndexRelid for the lower call toindex_create(). It would be good of course to check that whenswapping we have the code to handle that for a lot of indexes atonce.--Michael",
"msg_date": "Thu, 18 Jun 2020 14:37:43 +0800",
"msg_from": "\"=?UTF-8?B?5p2O5p2wKOaFjui/vSk=?=\" <adger.lj@alibaba-inc.com>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?5Zue5aSN77ya5Zue5aSN77ya5Zue5aSN77ya5Zue5aSN77yaaG93IHRvIGNyZWF0ZSBpbmRl?=\n =?UTF-8?B?eCBjb25jdXJyZW50bHkgb24gcGFydGl0aW9uZWQgdGFibGU=?="
},
{
"msg_contents": "> Not sure I am following. In the case of REINDEX, it seems to me that\n> the calls to validate_index() and index_concurrently_build() can\n> happen in a separate transaction for each index, as long as all the\n> calls to index_concurrently_swap() are grouped together in the same\n> transaction to make sure that index partition trees are switched\n> consistently when all entries are swapped from an invalid state to a\n> valid state, because the swapping phase is also when we attach a fresh\n> index to a partition tree. See also index_concurrently_create_copy()\n> where we don't set parentIndexRelid for the lower call to\n> index_create(). It would be good of course to check that when\n> swapping we have the code to handle that for a lot of indexes at\n> once.\n\nSome errors in the last email were not clearly expressed.\n\nLet's look at this example: \nA partition table has five partitions,\n parttable: part1, part2, part3, part3 ,part5\nWe simply abstract the following definitions:\n phase 1: index_create(), it is only registered in catalogs.\n phase 2: index_concurrently_build(), Build the indexes.\n phase 3: validate_index(), insert any missing index entries, mark the index as valid.\n\n(scheme 1)\n```\nStartTransaction one\nparttable phase 1\npart 1 phase 1\npart 2 phase 1\npart 3 phase 1\npart 4 phase 1\npart 5 phase 1\nCommitTransaction\n\nStartTransaction two\nparttable phase 2part 1 phase 2\npart 2 phase 2\npart 3 phase 2 (error occurred )\npart 4 phase 2\npart 5 phase 2\nCommitTransaction\n\nStartTransaction three\nparttable phase 3\npart 1 phase 3\npart 2 phase 3 \npart 3 phase 3 \npart 4 phase 4 \npart 5 phase 5 CommitTransaction\n...\n```\nNow, the following steps cannot continue due to an error in Transaction two .\nso, Transaction two roll back, Transaction three haven't started.\nAll of our indexes are invalid. 
In this way, \nwe ensure the strong consistency of indexes in the partition tree.\nHowever, we need to rebuild all indexes when reindex.\n\n(scheme 2)\n```\nStartTransaction one\nparttable phase 1\npart 1 phase 1\npart 2 phase 1\npart 3 phase 1\npart 4 phase 1\npart 5 phase 1\nCommitTransaction\n\nStartTransaction two part 1 phase 2\nCommitTransaction\nStartTransaction three\npart 1 phase 3\nCommitTransaction\n\nStartTransaction fourpart 2 phase 2\nCommitTransaction\nStartTransaction five\npart 2 phase 3\nCommitTransaction\n\nStartTransaction sixpart 3 phase 2 (error occurred )\nCommitTransaction\nStartTransaction seven\npart 3 phase 3\nCommitTransaction\n\nStartTransaction xxpart 4 phase 2\nCommitTransaction\nStartTransaction xx\npart 4 phase 3\nCommitTransaction\n\nStartTransaction xxpart 5 phase 2\nCommitTransaction\nStartTransaction xx\npart 5 phase 3\nCommitTransaction\n\nStartTransaction xxparttable phase 2\nCommitTransaction\nStartTransaction xx\nparttable phase 3\nCommitTransaction\n```\n\nNow, the following steps cannot continue due to an error in Transaction six .\nso, Transaction six roll back, Transactions behind Transaction six have not started\nThe indexes of the p1 and p2 partitions are available. Other indexes are invalid.\nIn reindex, we can ignore the rebuild of p1 and p2. 
\nEspecially when there are many partitions, \nthis can save the rebuild of some partition indexes,\nThis seems better, although it seems to be inconsistent.\n\nDo you think that scheme is more suitable for CIC?\n\n\nThank you very much,\n Regards, Adger\n\n\n\n------------------------------------------------------------------\n发件人:李杰(慎追) <adger.lj@alibaba-inc.com>\n发送时间:2020年6月18日(星期四) 14:37\n收件人:Michael Paquier <michael@paquier.xyz>\n抄 送:Justin Pryzby <pryzby@telsasoft.com>; pgsql-hackers <pgsql-hackers@lists.postgresql.org>; 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>; Alvaro Herrera <alvherre@2ndquadrant.com>\n主 题:回复:回复:回复:回复:how to create index concurrently on partitioned table\n\n> Not sure I am following. In the case of REINDEX, it seems to me that\n> the calls to validate_index() and index_concurrently_build() can\n> happen in a separate transaction for each index, as long as all the\n> calls to index_concurrently_swap() are grouped together in the same\n> transaction to make sure that index partition trees are switched\n> consistently when all entries are swapped from an invalid state to a\n> valid state, because the swapping phase is also when we attach a fresh\n> index to a partition tree. See also index_concurrently_create_copy()\n> where we don't set parentIndexRelid for the lower call to\n> index_create(). 
It would be good of course to check that when\n> swapping we have the code to handle that for a lot of indexes at\n> once.\n\nLet's look at this example: \nA partition table has five partitions,\n parttable: part1, part2, part3, part3 ,part5\nWe simply abstract the following definitions:\n phase 1: index_create(), it is only registered in catalogs.\n phase 2: index_concurrently_build(), Build the indexes.\n phase 3: validate_index(), insert any missing index entries, mark the index as valid.\n\n(schema 1)\n```\nStartTransaction one\nparttable phase 1\npart 1 phase 1\npart 2 phase 1\npart 3 phase 1\npart 4 phase 1\npart 5 phase 1\nCommitTransaction\n\nStartTransaction two\nparttable phase 2part 1 phase 2\npart 2 phase 2\npart 3 phase 2 (error occurred )\npart 4 phase 2\npart 5 phase 2\nCommitTransaction\n\nStartTransaction three\nparttable phase 3\npart 1 phase 3\npart 2 phase 3 \npart 3 phase 3 \npart 4 phase 4 \npart 5 phase 5 CommitTransaction\n...\n```\nNow, the following steps cannot continue due to an error in Transaction two .\nso, Transaction two roll back, Transaction three haven't started.\nAll of our indexes are invalid. 
In this way, \nwe ensure the strong consistency of indexes in the partition tree.\nHowever, we need to rebuild all indexes when reindex.\n\n(schema 2)\n```\nStartTransaction one\nparttable phase 1\npart 1 phase 1\npart 2 phase 1\npart 3 phase 1\npart 4 phase 1\npart 5 phase 1\nCommitTransaction\n\nStartTransaction two part 1 phase 2\npart 1 phase 3\nCommitTransaction\n\nStartTransaction three part 2 phase 2\npart 2 phase 3\nCommitTransaction\n\nStartTransaction fourpart 3 phase 2 (error occurred )\npart 3 phase 3\nCommitTransaction\n\nStartTransaction five part 4 phase 2\npart 4 phase 3\n\nStartTransaction sixpart 5 phase 2\npart 5 phase 3\nCommitTransaction\n\nStartTransaction sevenparttable phase 2\nparttable phase 3\nCommitTransaction\n```\n\nNow, the following steps cannot continue due to an error in Transaction four .\nso, Transaction four roll back, Transactions behind Transaction 3 have not started\nThe indexes of the p1 and p2 partitions are available. Other indexes are invalid.\nIn reindex, we can ignore the rebuild of p1 and p2.\nThis seems better, although it seems to be inconsistent.\n\nDo you think that scheme is more suitable for CIC?\n\n\nThank you very much,\n Regards, Adger\n\n\n\n\n\n\n\n------------------------------------------------------------------\n发件人:Michael Paquier <michael@paquier.xyz>\n发送时间:2020年6月18日(星期四) 10:41\n收件人:李杰(慎追) <adger.lj@alibaba-inc.com>\n抄 送:Justin Pryzby <pryzby@telsasoft.com>; pgsql-hackers <pgsql-hackers@lists.postgresql.org>; 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>; Alvaro Herrera <alvherre@2ndquadrant.com>\n主 题:Re: 回复:回复:回复:how to create index concurrently on partitioned table\n\nOn Wed, Jun 17, 2020 at 10:22:28PM +0800, 李杰(慎追) wrote:\n> However, I found a problem. If there are many partitions, \n> we may need to handle too many missing index entries when\n> validate_index(). Especially for the first partition, the time may\n> have been long and many entries are missing. 
In this case, why\n> don't we put the second and third phase together into a transaction\n> for each partition? \n\nNot sure I am following. In the case of REINDEX, it seems to me that\nthe calls to validate_index() and index_concurrently_build() can\nhappen in a separate transaction for each index, as long as all the\ncalls to index_concurrently_swap() are grouped together in the same\ntransaction to make sure that index partition trees are switched\nconsistently when all entries are swapped from an invalid state to a\nvalid state, because the swapping phase is also when we attach a fresh\nindex to a partition tree. See also index_concurrently_create_copy()\nwhere we don't set parentIndexRelid for the lower call to\nindex_create(). It would be good of course to check that when\nswapping we have the code to handle that for a lot of indexes at\nonce.\n--\nMichael\n\n> Not sure I am following. In the case of REINDEX, it seems to me that> the calls to validate_index() and index_concurrently_build() can> happen in a separate transaction for each index, as long as all the> calls to index_concurrently_swap() are grouped together in the same> transaction to make sure that index partition trees are switched> consistently when all entries are swapped from an invalid state to a> valid state, because the swapping phase is also when we attach a fresh> index to a partition tree. See also index_concurrently_create_copy()> where we don't set parentIndexRelid for the lower call to> index_create(). It would be good of course to check that when> swapping we have the code to handle that for a lot of indexes at> once.Some errors in the last email were not clearly expressed.Let's look at this example: A partition table has five partitions, parttable: part1, part2, part3, part3 ,part5We simply abstract the following definitions: phase 1: index_create(), it is only registered in catalogs. phase 2: index_concurrently_build(), Build the indexes. 
phase 3: validate_index(), insert any missing index entries, mark the index as valid.(scheme 1)```StartTransaction oneparttable phase 1part 1 phase 1part 2 phase 1part 3 phase 1part 4 phase 1part 5 phase 1CommitTransactionStartTransaction twoparttable phase 2part 1 phase 2 part 2 phase 2 part 3 phase 2 (error occurred )part 4 phase 2 part 5 phase 2 CommitTransactionStartTransaction threeparttable phase 3part 1 phase 3part 2 phase 3 part 3 phase 3 part 4 phase 4 part 5 phase 5 CommitTransaction...```Now, the following steps cannot continue due to an error in Transaction two .so, Transaction two roll back, Transaction three haven't started.All of our indexes are invalid. In this way, we ensure the strong consistency of indexes in the partition tree.However, we need to rebuild all indexes when reindex.(scheme 2)```StartTransaction oneparttable phase 1part 1 phase 1part 2 phase 1part 3 phase 1part 4 phase 1part 5 phase 1CommitTransactionStartTransaction two part 1 phase 2 CommitTransactionStartTransaction threepart 1 phase 3CommitTransactionStartTransaction fourpart 2 phase 2 CommitTransactionStartTransaction fivepart 2 phase 3CommitTransactionStartTransaction sixpart 3 phase 2 (error occurred )CommitTransactionStartTransaction sevenpart 3 phase 3CommitTransactionStartTransaction xxpart 4 phase 2 CommitTransactionStartTransaction xxpart 4 phase 3CommitTransactionStartTransaction xxpart 5 phase 2 CommitTransactionStartTransaction xxpart 5 phase 3CommitTransactionStartTransaction xxparttable phase 2 CommitTransactionStartTransaction xxparttable phase 3CommitTransaction```Now, the following steps cannot continue due to an error in Transaction six .so, Transaction six roll back, Transactions behind Transaction six have not startedThe indexes of the p1 and p2 partitions are available. Other indexes are invalid.In reindex, we can ignore the rebuild of p1 and p2. 
Especially when there are many partitions, this can save the rebuild of some partition indexes,This seems better, although it seems to be inconsistent.Do you think that scheme is more suitable for CIC?Thank you very much, Regards, Adger------------------------------------------------------------------发件人:李杰(慎追) <adger.lj@alibaba-inc.com>发送时间:2020年6月18日(星期四) 14:37收件人:Michael Paquier <michael@paquier.xyz>抄 送:Justin Pryzby <pryzby@telsasoft.com>; pgsql-hackers <pgsql-hackers@lists.postgresql.org>; 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>; Alvaro Herrera <alvherre@2ndquadrant.com>主 题:回复:回复:回复:回复:how to create index concurrently on partitioned table> Not sure I am following. In the case of REINDEX, it seems to me that> the calls to validate_index() and index_concurrently_build() can> happen in a separate transaction for each index, as long as all the> calls to index_concurrently_swap() are grouped together in the same> transaction to make sure that index partition trees are switched> consistently when all entries are swapped from an invalid state to a> valid state, because the swapping phase is also when we attach a fresh> index to a partition tree. See also index_concurrently_create_copy()> where we don't set parentIndexRelid for the lower call to> index_create(). It would be good of course to check that when> swapping we have the code to handle that for a lot of indexes at> once.Let's look at this example: A partition table has five partitions, parttable: part1, part2, part3, part3 ,part5We simply abstract the following definitions: phase 1: index_create(), it is only registered in catalogs. phase 2: index_concurrently_build(), Build the indexes. 
phase 3: validate_index(), insert any missing index entries, mark the index as valid.(schema 1)```StartTransaction oneparttable phase 1part 1 phase 1part 2 phase 1part 3 phase 1part 4 phase 1part 5 phase 1CommitTransactionStartTransaction twoparttable phase 2part 1 phase 2 part 2 phase 2 part 3 phase 2 (error occurred )part 4 phase 2 part 5 phase 2 CommitTransactionStartTransaction threeparttable phase 3part 1 phase 3part 2 phase 3 part 3 phase 3 part 4 phase 4 part 5 phase 5 CommitTransaction...```Now, the following steps cannot continue due to an error in Transaction two .so, Transaction two roll back, Transaction three haven't started.All of our indexes are invalid. In this way, we ensure the strong consistency of indexes in the partition tree.However, we need to rebuild all indexes when reindex.(schema 2)```StartTransaction oneparttable phase 1part 1 phase 1part 2 phase 1part 3 phase 1part 4 phase 1part 5 phase 1CommitTransactionStartTransaction two part 1 phase 2 part 1 phase 3CommitTransactionStartTransaction three part 2 phase 2 part 2 phase 3CommitTransactionStartTransaction fourpart 3 phase 2 (error occurred )part 3 phase 3CommitTransactionStartTransaction five part 4 phase 2 part 4 phase 3StartTransaction sixpart 5 phase 2 part 5 phase 3CommitTransactionStartTransaction sevenparttable phase 2 parttable phase 3CommitTransaction```Now, the following steps cannot continue due to an error in Transaction four .so, Transaction four roll back, Transactions behind Transaction 3 have not startedThe indexes of the p1 and p2 partitions are available. 
Other indexes are invalid.In reindex, we can ignore the rebuild of p1 and p2.This seems better, although it seems to be inconsistent.Do you think that scheme is more suitable for CIC?Thank you very much, Regards, Adger------------------------------------------------------------------发件人:Michael Paquier <michael@paquier.xyz>发送时间:2020年6月18日(星期四) 10:41收件人:李杰(慎追) <adger.lj@alibaba-inc.com>抄 送:Justin Pryzby <pryzby@telsasoft.com>; pgsql-hackers <pgsql-hackers@lists.postgresql.org>; 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>; Alvaro Herrera <alvherre@2ndquadrant.com>主 题:Re: 回复:回复:回复:how to create index concurrently on partitioned tableOn Wed, Jun 17, 2020 at 10:22:28PM +0800, 李杰(慎追) wrote:> However, I found a problem. If there are many partitions, > we may need to handle too many missing index entries when> validate_index(). Especially for the first partition, the time may> have been long and many entries are missing. In this case, why> don't we put the second and third phase together into a transaction> for each partition? Not sure I am following. In the case of REINDEX, it seems to me thatthe calls to validate_index() and index_concurrently_build() canhappen in a separate transaction for each index, as long as all thecalls to index_concurrently_swap() are grouped together in the sametransaction to make sure that index partition trees are switchedconsistently when all entries are swapped from an invalid state to avalid state, because the swapping phase is also when we attach a freshindex to a partition tree. See also index_concurrently_create_copy()where we don't set parentIndexRelid for the lower call toindex_create(). It would be good of course to check that whenswapping we have the code to handle that for a lot of indexes atonce.--Michael",
"msg_date": "Thu, 18 Jun 2020 15:01:30 +0800",
"msg_from": "\"=?UTF-8?B?5p2O5p2wKOaFjui/vSk=?=\" <adger.lj@alibaba-inc.com>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?5Zue5aSN77ya5Zue5aSN77ya5Zue5aSN77ya5Zue5aSN77yaaG93IHRvIGNyZWF0ZSBpbmRl?=\n =?UTF-8?B?eCBjb25jdXJyZW50bHkgb24gcGFydGl0aW9uZWQgdGFibGU=?="
},
{
"msg_contents": "On Mon, Jun 15, 2020 at 09:37:42PM +0900, Michael Paquier wrote:\n> On Mon, Jun 15, 2020 at 08:15:05PM +0800, 李杰(慎追) wrote:\n> > As shown above, an error occurred while creating an index in the second partition. \n> > It can be clearly seen that the index of the partitioned table is invalid \n> > and the index of the first partition is normal, the second partition is invalid, \n> > and the Third Partition index does not exist at all.\n> \n> That's a problem. I really think that we should make the steps of the\n> concurrent operation consistent across all relations, meaning that all\n> the indexes should be created as invalid for all the parts of the\n> partition tree, including partitioned tables as well as their\n> partitions, in the same transaction. Then a second new transaction\n> gets used for the index build, followed by a third one for the\n> validation that switches the indexes to become valid.\n\nNote that the mentioned problem wasn't serious: there was missing index on\nchild table, therefor the parent index was invalid, as intended. However I\nagree that it's not nice that the command can fail so easily and leave behind\nsome indexes created successfully and some failed some not created at all.\n\nBut I took your advice initially creating invalid inds.\n\nOn Tue, Jun 16, 2020 at 10:02:21AM +0900, Michael Paquier wrote:\n> CIC is an operation that exists while allowing read and writes to\n> still happen in parallel, so that's fine IMO if it takes time. Now it\n> is true that it may not scale well so we should be careful with the\n> approach taken. Also, I think that the case of REINDEX would require\n> less work overall because we already have some code in place to gather\n> multiple indexes from one or more relations and work on these\n> consistently, all at once.\n\nI'm not sure if by reindex you mean my old 0002. 
That's now 0001, so if it can\nbe simplified somehow, that's great..\n\nThat gave me the idea to layer CIC on top of Reindex, since I think it does\nexactly what's needed.\n\n-- \nJustin",
"msg_date": "Sat, 8 Aug 2020 01:37:44 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: =?utf-8?B?5Zue5aSN77yaaG8=?= =?utf-8?Q?w?= to create index\n concurrently on partitioned table"
},
{
"msg_contents": "On Sat, Aug 08, 2020 at 01:37:44AM -0500, Justin Pryzby wrote:\n> That gave me the idea to layer CIC on top of Reindex, since I think it does\n> exactly what's needed.\n\nFor now, I would recommend to focus first on 0001 to add support for\npartitioned tables and indexes to REINDEX. CIC is much more\ncomplicated btw, but I am not entering in the details now.\n\n- /*\n- * This may be useful when implemented someday; but that day is not today.\n- * For now, avoid erroring out when called in a multi-table context\n- * (REINDEX SCHEMA) and happen to come across a partitioned table. The\n- * partitions may be reindexed on their own anyway.\n- */\n+ /* Avoid erroring out */\n if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n {\nThis comment does not help, and actually this becomes incorrect as\nreindex for this relkind becomes supported once 0001 is done.\n\n+ case RELKIND_INDEX:\n+ reindex_index(inhrelid, false, get_rel_persistence(inhrelid),\n+ options | REINDEXOPT_REPORT_PROGRESS);\n+ break;\n+ case RELKIND_RELATION:\n+ (void) reindex_relation(inhrelid,\n+ REINDEX_REL_PROCESS_TOAST |\n+ REINDEX_REL_CHECK_CONSTRAINTS,\n+ options | REINDEXOPT_REPORT_PROGRESS);\nReindexPartitionedRel() fails to consider the concurrent case here for\npartition indexes and tables, as reindex_index()/reindex_relation()\nare the APIs used in the non-concurrent case. Once you consider the\nconcurrent case correctly, we also need to be careful with partitions\nthat have a temporary persistency (note that we don't allow partition\ntrees to mix persistency types, all partitions have to be temporary or\npermanent).\n\nI think that you are right to make the entry point to handle\npartitioned index in ReindexIndex() and partitioned table in\nReindexTable(), but the structure of the patch should be different:\n- The second portion of ReindexMultipleTables() should be moved into a\nseparate routine, taking in input a list of relation OIDs. 
This needs\nto be extended a bit so that reindex_index() gets called for an index\nrelkind if the relpersistence is temporary or if we have a\nnon-concurrent reindex. The idea is that we finish with a single code\npath able to work on a list of relations. And your patch adds more of\nthat as of ReindexPartitionedRel().\n- We should *not* handle directly partitioned index and/or table in\nReindexRelationConcurrently() to not complicate the logic where we\ngather all the indexes of a table/matview. So I think that the list\nof partition indexes/tables to work on should be built directly in\nReindexIndex() and ReindexTable(), and then this should call the\nsecond part of ReindexMultipleTables() refactored in the previous\npoint. This way, each partition index gets done individually in its\nown transaction. For a partition table, all indexes of this partition\nare rebuilt in the same set of transactions. For the concurrent case,\nwe already have reindex_concurrently_swap, which is able to switch the\ndependencies of two indexes within a partition tree, so we can rely on\nthat so that a failure in the middle of the operation never leaves a\npartition structure in an inconsistent state.\n--\nMichael",
"msg_date": "Sun, 9 Aug 2020 14:00:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reply: how to create index concurrently on partitioned table"
},
{
"msg_contents": "Thanks for looking.\n\nOn Sun, Aug 09, 2020 at 02:00:09PM +0900, Michael Paquier wrote:\n> > exactly what's needed.\n> \n> For now, I would recommend to focus first on 0001 to add support for\n> partitioned tables and indexes to REINDEX. CIC is much more\n> complicated btw, but I am not entering in the details now.\n> \n> + /* Avoid erroring out */\n> if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n> {\n> This comment does not help, and actually this becomes incorrect as\n> reindex for this relkind becomes supported once 0001 is done.\n\nI made a minimal change to avoid forgetting to eventually change that part.\n\n> ReindexPartitionedRel() fails to consider the concurrent case here for\n> partition indexes and tables, as reindex_index()/reindex_relation()\n> are the APIs used in the non-concurrent case. Once you consider the\n> concurrent case correctly, we also need to be careful with partitions\n> that have a temporary persistency (note that we don't allow partition\n> trees to mix persistency types, all partitions have to be temporary or\n> permanent).\n\nFixed.\n\n> I think that you are right to make the entry point to handle\n> partitioned index in ReindexIndex() and partitioned table in\n> ReindexTable(), but the structure of the patch should be different:\n> - The second portion of ReindexMultipleTables() should be moved into a\n> separate routine, taking in input a list of relation OIDs. This needs\n> to be extended a bit so as reindex_index() gets called for an index\n> relkind if the relpersistence is temporary or if we have a\n> non-concurrent reindex. The idea is that we finish with a single code\n> path able to work on a list of relations. And your patch adds more of\n> that as of ReindexPartitionedRel().\n\nIt's a good idea.\n\n> - We should *not* handle directly partitioned index and/or table in\n> ReindexRelationConcurrently() to not complicate the logic where we\n> gather all the indexes of a table/matview. 
So I think that the list\n> of partition indexes/tables to work on should be built directly in\n> ReindexIndex() and ReindexTable(), and then this should call the\n> second part of ReindexMultipleTables() refactored in the previous\n> point.\n\nI think I addressed these mostly as you intended.\n\n> This way, each partition index gets done individually in its\n> own transaction. For a partition table, all indexes of this partition\n> are rebuilt in the same set of transactions. For the concurrent case,\n> we have already reindex_concurrently_swap that it able to switch the\n> dependencies of two indexes within a partition tree, so we can rely on\n> that so as a failure in the middle of the operation never leaves the\n> a partition structure in an inconsistent state.\n\n-- \nJustin",
"msg_date": "Sun, 9 Aug 2020 18:44:23 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Reply: how to create index concurrently on partitioned table"
},
{
"msg_contents": "On Sun, Aug 09, 2020 at 06:44:23PM -0500, Justin Pryzby wrote:\n> On Sun, Aug 09, 2020 at 02:00:09PM +0900, Michael Paquier wrote:\n>> For now, I would recommend to focus first on 0001 to add support for\n>> partitioned tables and indexes to REINDEX. CIC is much more\n>> complicated btw, but I am not entering in the details now.\n>> \n>> + /* Avoid erroring out */\n>> if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n>> {\n>> This comment does not help, and actually this becomes incorrect as\n>> reindex for this relkind becomes supported once 0001 is done.\n> \n> I made a minimal change to avoid forgetting to eventually change\n> that part.\n\nWhy not changing it then? We already filter out per relkind in all\nthe code paths calling reindex_relation(), be it in indexcmds.c for\nschema-level reindex or even tablecmds.c, so I have switched this part\nto an elog().\n\n>> - We should *not* handle directly partitioned index and/or table in\n>> ReindexRelationConcurrently() to not complicate the logic where we\n>> gather all the indexes of a table/matview. So I think that the list\n>> of partition indexes/tables to work on should be built directly in\n>> ReindexIndex() and ReindexTable(), and then this should call the\n>> second part of ReindexMultipleTables() refactored in the previous\n>> point.\n> \n> I think I addressed these mostly as you intended.\n\nMostly. I have been hacking on this patch, and basically rewrote it\nas the attached. The handling of the memory context used to keep the\nlist of partitions intact across transactions was rather clunky: the\ncontext was not reset when we are done, and we would call more APIs\nthan necessary while switching to it, like find_all_inheritors() which\ncould do much more allocations. 
I have fixed that by minimizing the\nareas where the private context is used, switching to it only when\nsaving a new OID in the list of partitions, or a session lock (see\nbelow for this part).\n\nWhile on it, I found that the test coverage was not enough, so I have\nextended the set of tests to make sure any concurrent and\nnon-concurrent operation for partitioned tables and indexes change the\ncorrect set of relfilenodes for each operation. I have written some\ncustom functions to minimize the duplication (the whole thing cannot\nbe grouped as those commands cannot run in a transaction block).\n\nSpeaking of which, the patch missed that REINDEX INDEX/TABLE should\nnot run in a transaction block when working on a partitioned\nrelation. And the documentation needs to be clear about the\nlimitation of each operation, so I have written more about all that.\nThe patch also has commented out areas with slashes or such, and I\nhave added some elog() and some asserts to make sure that we don't\ncross any areas that should not work with partitioned relations.\n\nWhile hacking on this patch, I have found an old bug in the REINDEX\nlogic: we build a list of relations to reindex in\nReindexMultipleTables() for schema and database reindexes, but it\nhappens that we don't recheck if the relations listed actually exists\nor not, so dropping a relation during a large reindex can cause \nsparse failures because of relations that cannot be found anymore. In\nthe case of this thread, the problem is different though (the proposed\npatch was full of holes regarding that) and we need to use session\nlocks on the parent *table* partitions (not the indexes) to avoid any\nissues within the first transaction building the list of relations to \nwork on, similarly to REINDEX CONCURRENTLY. So I fixed this problem\nthis way. For the schema and database cases, I think that we would\nneed to do something similar to VACUUM, aka have an extra code path to\nskip relations not defined. 
I'll leave that for another thread.\n\nOne last thing. I think that the patch is in a rather good shape, but\nthere is one error message I am not happy with when running some\ncommands in a transaction block. Say, this sequence: \nCREATE TABLE parent_tab (id int) PARTITION BY RANGE (id);\nCREATE INDEX parent_index ON parent_tab (id);\nBEGIN;\nREINDEX INDEX parent_index; -- error\nERROR: 25001: REINDEX INDEX cannot run inside a transaction block\nLOCATION: PreventInTransactionBlock, xact.c:3386\n\nThis error can be confusing, because we don't tell directly that the\nrelation involved here is partitioned, and REINDEX INDEX/TABLE are\nfine when doing their stuff on non-partitions. For other code paths,\nwe have leveraged such errors to use the grammar specific to\npartitions, for example \"CREATE TABLE .. PARTITION OF\" or such as\nthese don't cause translation issues, but we don't have a specific\nsyntax of REINDEX for partitioned relations, and I don't think that we\nneed more grammar just for that. The simplest idea I have here is to\njust use an error callback to set an errcontext(), saying roughly:\n\"while reindexing partitioned table/index %s\" while we go through\nPreventInTransactionBlock(). I have done nothing about that yet but\nadding an errcallback is simple enough. Perhaps somebody has a\ndifferent idea here?\n--\nMichael",
"msg_date": "Wed, 12 Aug 2020 13:54:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reply: how to create index concurrently on partitioned table"
},
{
"msg_contents": "Thanks for helping with this.\n\nOn Wed, Aug 12, 2020 at 01:54:38PM +0900, Michael Paquier wrote:\n> +++ b/src/backend/catalog/index.c\n> @@ -3661,20 +3662,12 @@ reindex_relation(Oid relid, int flags, int options)\n> +\t\telog(ERROR, \"unsupported relation kind for relation \\\"%s\\\"\",\n> +\t\t\t RelationGetRelationName(rel));\n\nI guess it should show the relkind(%c) in the message, like these:\n\nsrc/backend/commands/tablecmds.c: elog(ERROR, \"unexpected relkind: %d\", (int) relkind);\nsrc/backend/tcop/utility.c: elog(ERROR, \"unexpected relkind \\\"%c\\\" on partition \\\"%s\\\"\",\n\nISTM reindex_index is missing that, too:\n\n8b08f7d4820fd7a8ef6152a9dd8c6e3cb01e5f99\n+ if (iRel->rd_rel->relkind == RELKIND_PARTITIONED_INDEX)\n+ elog(ERROR, \"unsupported relation kind for index \\\"%s\\\"\",\n+ RelationGetRelationName(iRel));\n\n\n> diff --git a/doc/src/sgml/ref/reindex.sgml b/doc/src/sgml/ref/reindex.sgml\n> @@ -259,8 +263,12 @@ REINDEX [ ( <replaceable class=\"parameter\">option</replaceable> [, ...] ) ] { IN\n> </para>\n> \n> <para>\n> - Reindexing partitioned tables or partitioned indexes is not supported.\n> - Each individual partition can be reindexed separately instead.\n> + Reindexing partitioned indexes or partitioned tables is supported\n> + with respectively <command>REINDEX INDEX</command> or\n> + <command>REINDEX TABLE</command>.\n\nShould say \"..with REINDEX INDEX or REINDEX TABLE, respectively\".\n\n> + Each partition of the partitioned\n> + relation defined is rebuilt in its own transaction.\n\n=> Each partition of the specified partitioned relation is reindexed in a\nseparate transaction.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 12 Aug 2020 00:28:20 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Reply: how to create index concurrently on partitioned table"
},
{
"msg_contents": "On Wed, Aug 12, 2020 at 12:28:20AM -0500, Justin Pryzby wrote:\n> On Wed, Aug 12, 2020 at 01:54:38PM +0900, Michael Paquier wrote:\n>> +++ b/src/backend/catalog/index.c\n>> @@ -3661,20 +3662,12 @@ reindex_relation(Oid relid, int flags, int options)\n>> +\t\telog(ERROR, \"unsupported relation kind for relation \\\"%s\\\"\",\n>> +\t\t\t RelationGetRelationName(rel));\n> \n> I guess it should show the relkind(%c) in the message, like these:\n\nSure, but I don't see much the point in adding the relkind here\nknowing that we know which one it is.\n\n> ISTM reindex_index is missing that, too:\n> \n> 8b08f7d4820fd7a8ef6152a9dd8c6e3cb01e5f99\n> + if (iRel->rd_rel->relkind == RELKIND_PARTITIONED_INDEX)\n> + elog(ERROR, \"unsupported relation kind for index \\\"%s\\\"\",\n> + RelationGetRelationName(iRel));\n\nThe error string does not follow the usual project style either, so I\nhave updated both.\n\n>> <para>\n>> - Reindexing partitioned tables or partitioned indexes is not supported.\n>> - Each individual partition can be reindexed separately instead.\n>> + Reindexing partitioned indexes or partitioned tables is supported\n>> + with respectively <command>REINDEX INDEX</command> or\n>> + <command>REINDEX TABLE</command>.\n> \n> Should say \"..with REINDEX INDEX or REINDEX TABLE, respectively\".\n>\n>> + Each partition of the partitioned\n>> + relation defined is rebuilt in its own transaction.\n> \n> => Each partition of the specified partitioned relation is reindexed in a\n> separate transaction.\n\nThanks, good idea.\n\nI have been able to work more on this patch today, and finally added\nan error context for the transaction block check, as that's cleaner.\nIn my manual tests, I have also bumped into a case that failed with\nthe original patch (where there were no session locks), and created\nan isolation test based on it: drop of a partition leaf concurrently\nto REINDEX done on the parent. 
One last thing I have spotted is that\nwe failed to discard properly foreign tables defined as leaves of a\npartition tree, causing a reindex to fail, so reindex_partitions()\nought to just use RELKIND_HAS_STORAGE() to do its filtering work. I\nam leaving this patch alone for a couple of days now, and I'll try to\ncome back to it after and potentially commit it. The attached has\nbeen indented by the way.\n--\nMichael",
"msg_date": "Wed, 12 Aug 2020 22:37:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reply: how to create index concurrently on partitioned table"
},
{
"msg_contents": "On Wed, Aug 12, 2020 at 10:37:08PM +0900, Michael Paquier wrote:\n> I have been able to work more on this patch today, and finally added\n> an error context for the transaction block check, as that's cleaner.\n> In my manual tests, I have also bumped into a case that failed with\n> the original patch (where there were no session locks), and created\n> an isolation test based on it: drop of a partition leaf concurrently\n> to REINDEX done on the parent. One last thing I have spotted is that\n> we failed to discard properly foreign tables defined as leaves of a\n> partition tree, causing a reindex to fail, so reindex_partitions()\n> ought to just use RELKIND_HAS_STORAGE() to do its filtering work. I\n> am leaving this patch alone for a couple of days now, and I'll try to\n> come back to it after and potentially commit it. The attached has\n> been indented by the way.\n\nI got to think more about the locking strategy used in this patch, and\nI am afraid that we should fix the bug with REINDEX SCHEMA/DATABASE\nfirst. What we have here is rather similar to a REINDEX SCHEMA with\nall the partitions on the same schema, so it would be better to apply\nthe same set of assumptions and logic for the reindex of partitioned\nrelations as we do for the others. This makes the whole logic more\nconsistent, but it also reduces the surface of bugs. I have created a\nseparate thread for the problem, and posted a patch:\nhttps://www.postgresql.org/message-id/20200813043805.GE11663@paquier.xyz\n\nOnce this gets done, we should then be able to get rid of the extra\nsession locking taken when building the list of partitions, limiting\nsession locks to only be taken during the concurrent reindex of a\nsingle partition (the table itself for a partition table, and the\nparent table for a partition index), making the whole operation less\ninvasive.\n--\nMichael",
"msg_date": "Fri, 14 Aug 2020 09:29:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reply: how to create index concurrently on partitioned table"
},
{
"msg_contents": "On Sun, Aug 09, 2020 at 06:44:23PM -0500, Justin Pryzby wrote:\n> Thanks for looking.\n\nThe REINDEX patch is progressing its way, so I have looked a bit at\nthe part for CIC.\n\nVisibly, the case of multiple partition layers is not handled\ncorrectly. Here is a sequence that gets broken:\nCREATE TABLE parent_tab (id int) PARTITION BY RANGE (id);\nCREATE TABLE child_0_10 PARTITION OF parent_tab\n FOR VALUES FROM (0) TO (10);\nCREATE TABLE child_10_20 PARTITION OF parent_tab\n FOR VALUES FROM (10) TO (20);\nCREATE TABLE child_20_30 PARTITION OF parent_tab\n FOR VALUES FROM (20) TO (30);\nINSERT INTO parent_tab VALUES (generate_series(0,29));\nCREATE TABLE child_30_40 PARTITION OF parent_tab\n FOR VALUES FROM (30) TO (40)\n PARTITION BY RANGE(id);\nCREATE TABLE child_30_35 PARTITION OF child_30_40\n FOR VALUES FROM (30) TO (35);\nCREATE TABLE child_35_40 PARTITION OF child_30_40\n FOR VALUES FROM (35) TO (40);\nINSERT INTO parent_tab VALUES (generate_series(30,39));\nCREATE INDEX CONCURRENTLY parent_index ON parent_tab (id);\n\nThis fails as follows:\nERROR: XX000: unrecognized node type: 2139062143\nLOCATION: copyObjectImpl, copyfuncs.c:5718\n\nAnd the problem visibly comes from some handling with the second level\nof partitions, child_30_40 in my example above. Even with that, this\noutlines a rather severe problem in the patch: index_set_state_flags()\nflips indisvalid on/off using a non-transactional update (see the use\nof heap_inplace_update), meaning that if we fail in the middle of the\noperation we may finish with a partition index tree where some of the\nindexes are valid and some of them are invalid. In order to make this\nlogic consistent, I am afraid that we will need to do two things:\n- Change index_set_state_flags() so as it uses a transactional\nupdate. 
That's something I would like to change for other reasons,\nlike making sure that the REPLICA IDENTITY of a parent relation is\ncorrectly reset when dropping a replica index.\n- Make all the indexes of the partition tree valid in the *same*\nsub-transaction.\nYou can note that this case is different than a concurrent REINDEX,\nbecause in this case we just do an in-place change between the old and\nnew index, meaning that even if there is a failure happening while\nprocessing, we may have some invalid indexes, but there are still\nvalid indexes attached to the partition tree, at any time.\n\n+ MemoryContext oldcontext = MemoryContextSwitchTo(ind_context);\n PartitionDesc partdesc = RelationGetPartitionDesc(rel);\n int nparts = partdesc->nparts;\n+ char *relname = pstrdup(RelationGetRelationName(rel));\nEr, no. We should not play with the relation cache calls in a private\nmemory context. I think that this needs much more thoughts.\n--\nMichael",
"msg_date": "Tue, 1 Sep 2020 14:51:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reply: how to create index concurrently on partitioned table"
},
{
"msg_contents": "On Fri, Aug 14, 2020 at 09:29:45AM +0900, Michael Paquier wrote:\n> Once this gets done, we should then be able to get rid of the extra\n> session locking taken when building the list of partitions, limiting\n> session locks to only be taken during the concurrent reindex of a\n> single partition (the table itself for a partition table, and the\n> parent table for a partition index), making the whole operation less\n> invasive.\n\nThe problem with dropped relations in REINDEX has been addressed by\n1d65416, so I have gone through this patch again and simplified the\nuse of session locks, these being taken only when doing a REINDEX\nCONCURRENTLY for a given partition. This part is in a rather\ncommittable shape IMO, so I would like to get it done first, before\nlooking more at the other cases with CIC and CLUSTER. I am still\nplanning to go through it once again.\n--\nMichael",
"msg_date": "Wed, 2 Sep 2020 10:39:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reply: how to create index concurrently on partitioned table"
},
{
"msg_contents": "On 02.09.2020 04:39, Michael Paquier wrote:\n>\n> The problem with dropped relations in REINDEX has been addressed by\n> 1d65416, so I have gone through this patch again and simplified the\n> use of session locks, these being taken only when doing a REINDEX\n> CONCURRENTLY for a given partition. This part is in a rather\n> committable shape IMO, so I would like to get it done first, before\n> looking more at the other cases with CIC and CLUSTER. I am still\n> planning to go through it once again.\n> --\n> Michael\n\nThank you for advancing this work.\n\nI was reviewing the previous version, but the review became outdated \nbefore I sent it. Overall design is fine, but I see a bunch of things \nthat need to be fixed before commit.\n\nFirst of all, this patch fails at cfbot:\n\nindexcmds.c:2848:7: error: variable 'parentoid' set but not used \n[-Werror=unused-but-set-variable]\nOid parentoid;^\n\n\nIt seems to be just a typo. With this minor fix the patch compiles and \npasses tests.\n\ndiff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c\nindex 75008eebde..f5b3c53a83 100644\n--- a/src/backend/commands/indexcmds.c\n+++ b/src/backend/commands/indexcmds.c\n@@ -2864,7 +2864,7 @@ reindex_partitions(Oid relid, int options, bool concurrent,\n \n /* Save partition OID */\n old_context = MemoryContextSwitchTo(reindex_context);\n- partitions = lappend_oid(partitions, partoid);\n+ partitions = lappend_oid(partitions, parentoid);\n MemoryContextSwitchTo(old_context);\n }\n\n\nIf this guessed fix is correct, I see the problem in the patch logic. In \nreindex_partitions() we collect parent relations to pass them to \nreindex_multiple_internal(). It implicitly changes the logic from \nREINDEX INDEX to REINDEX RELATION, which is not the same, if table has \nmore than one index. 
For example, if I add one more index to a \npartitioned table from a create_index.sql test:\n\ncreate index idx on concur_reindex_part_0_2 using btree (c2);\n\nand call\n\nREINDEX INDEX CONCURRENTLY concur_reindex_part_index;\n\nidx will be reindexed as well. I doubt that it is the desired behavior.\n\n\nA few more random issues that I noticed at first glance:\n\n1) in create_index.sql\n\nAre these two lines intentional checks that must fail? If so, I propose \nto add a comment.\n\nREINDEX TABLE concur_reindex_part_index;\nREINDEX TABLE CONCURRENTLY concur_reindex_part_index;\n\nA few lines around also look like they were copy-pasted and need a \nsecond look.\n\n2) This part of ReindexRelationConcurrently() is misleading.\n\n        case RELKIND_PARTITIONED_TABLE:\n        case RELKIND_PARTITIONED_INDEX:\n        default:\n            /* Return error if type of relation is not supported */\n            ereport(ERROR,\n                    (errcode(ERRCODE_WRONG_OBJECT_TYPE),\n                     errmsg(\"cannot reindex this type of relation concurrently\")));\n\nMaybe an assertion is enough. It seems that we should never reach this \ncode because both ReindexTable and ReindexIndex handle those relkinds \nseparately. Which leads me to the next question.\n\n3) Is there a reason why reindex_partitions() is not inside \nReindexRelationConcurrently()? I think that logically it belongs there.\n\n4) I haven't tested multi-level partitions yet. In any case, it would be \ngood to document this behavior explicitly.\n\n\nI need a bit more time to review this patch more thoroughly. 
Please, \nwait for it, before committing.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 3 Sep 2020 22:02:58 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Reply: how to create index concurrently on partitioned table"
},
{
"msg_contents": "On Thu, Sep 03, 2020 at 10:02:58PM +0300, Anastasia Lubennikova wrote:\n> First of all, this patch fails at cfbot:\n> \n> indexcmds.c:2848:7: error: variable ‘parentoid’ set but not used\n> [-Werror=unused-but-set-variable]\n> Oid parentoid;^\n\nMissed this warning, thanks for pointing it out. This is an incorrect\nrebase: this variable was used as a workaround to take a session lock\non the parent table to be reindexed, session lock logic existing to\nprevent cache lookup errors when dropping some portions of the tree\nconcurrently (1d65416 as you already know). But we don't need that\nanymore.\n\n> If this guessed fix is correct, I see the problem in the patch logic. In\n> reindex_partitions() we collect parent relations to pass them to\n> reindex_multiple_internal(). It implicitly changes the logic from REINDEX\n> INDEX to REINDEX RELATION, which is not the same, if table has more than one\n> index.\n\nIncorrect guess here. parentoid refers to the parent table of an\nindex, so by saving its OID in the list of things to reindex, you\nwould finish by reindexing all the indexes of a partition. We need to\nuse partoid, as returned by find_all_inheritors() for all the members\nof the partition tree (relid can be a partitioned index or partitioned\ntable in reindex_partitions).\n\n> 1) in create_index.sql\n> \n> Are these two lines intentional checks that must fail? If so, I propose to\n> add a comment.\n> \n> REINDEX TABLE concur_reindex_part_index;\n> REINDEX TABLE CONCURRENTLY concur_reindex_part_index;\n> \n> A few lines around also look like they were copy-pasted and need a second\n> look.\n\nYou can note some slight differences though. These are test cases for\nREINDEX INDEX with tables, and REINDEX TABLE with indexes. What you\nare quoting here is the part for indexes with REINDEX TABLE. 
And\nthere are already comments, see:\n\"-- REINDEX INDEX fails for partitioned tables\"\n\"-- REINDEX TABLE fails for partitioned indexes\"\n\nPerhaps that's not enough, so I have added some more comments to\noutline that these commands fail (8 commands in total).\n\n> 2) This part of ReindexRelationConcurrently() is misleading.\n> \n> Maybe assertion is enough. It seems, that we should never reach this code\n> because both ReindexTable and ReindexIndex handle those relkinds\n> separately. Which leads me to the next question.\n\nYes, we could use an assert, but I did not feel any strong need to\nchange that either for this patch.\n\n> 3) Is there a reason, why reindex_partitions() is not inside\n> ReindexRelationConcurrently() ? I think that logically it belongs there.\n\nYes, it simplifies the build of the index list indexIds, as there is\nno need to loop back into a different routine if working on a table or\na matview.\n\n> 4) I haven't tested multi-level partitions yet. In any case, it would be\n> good to document this behavior explicitly.\n\nNot sure what addition we could do here. The patch states that each\npartition of the partitioned relation defined gets reindexed, which\nimplies that this handles multiple layers automatically.\n\n> I need a bit more time to review this patch more thoroughly. Please, wait\n> for it, before committing.\n\nGlad to hear that, please take the time you need.\n--\nMichael",
"msg_date": "Fri, 4 Sep 2020 09:51:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reply: how to create index concurrently on partitioned table"
},
{
"msg_contents": "On Fri, Sep 04, 2020 at 09:51:13AM +0900, Michael Paquier wrote:\n> Glad to hear that, please take the time you need.\n\nAttached is a rebased version to address the various conflicts after\n844c05a.\n--\nMichael",
"msg_date": "Mon, 7 Sep 2020 10:58:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reply: how to create index concurrently on partitioned table"
},
{
"msg_contents": "On 07.09.2020 04:58, Michael Paquier wrote:\n> On Fri, Sep 04, 2020 at 09:51:13AM +0900, Michael Paquier wrote:\n>> Glad to hear that, please take the time you need.\n> Attached is a rebased version to address the various conflicts after\n> 844c05a.\n> --\n> Michael\n>\nThank you.\n\nWith the fix for the cycle in reindex_partitions() and new comments \nadded, I think this patch is ready for commit.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Mon, 7 Sep 2020 17:23:58 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Reply: how to create index concurrently on partitioned table"
},
{
"msg_contents": "Thanks for completing and pushing the REINDEX patch and others.\n\nHere's a rebasified + fixed version of the others.\n\nOn Tue, Sep 01, 2020 at 02:51:58PM +0900, Michael Paquier wrote:\n> The REINDEX patch is progressing its way, so I have looked a bit at\n> the part for CIC.\n> \n> Visibly, the case of multiple partition layers is not handled\n> correctly. Here is a sequence that gets broken:\n..\n> This fails as follows:\n> ERROR: XX000: unrecognized node type: 2139062143\n> LOCATION: copyObjectImpl, copyfuncs.c:5718\n\nBecause copyObject needed to be called within a long-lived context.\n\nAlso, my previous revision failed to implement your suggestion to first build\ncatalog entries with INVALID indexes and to then reindex them. Fixed.\n\n-- \nJustin",
"msg_date": "Mon, 7 Sep 2020 21:39:16 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: =?utf-8?B?5Zue5aSN77yaaG8=?= =?utf-8?Q?w?= to create index\n concurrently on partitioned table"
},
{
"msg_contents": "On Mon, Sep 07, 2020 at 09:39:16PM -0500, Justin Pryzby wrote:\n> Also, my previous revision failed to implement your suggestion to first build\n> catalog entries with INVALID indexes and to then reindex them. Fixed.\n\n- childStmt->oldCreateSubid = InvalidSubTransactionId;\n- childStmt->oldFirstRelfilenodeSubid = childStmt->InvalidSubTransactionId;\n+ // childStmt->oldCreateSubid = childStmt->InvalidSubTransactionId;\n+ // childStmt->oldFirstRelfilenodeSubid = InvalidSubTransactionId;\nIn the CIC patch, what is that about? It is hard to follow what you\nare trying to achieve here.\n\n+ index_set_state_flags(indexRelationId, INDEX_CREATE_SET_READY);\n+ CommandCounterIncrement();\n+ index_set_state_flags(indexRelationId, INDEX_CREATE_SET_VALID);\nAnyway, this part of the logic is still not good here. If we fail in\nthe middle here, we may run into problems with a single partition\nindex that has only a portion of its flags set. As this gets called\nfor each partitioned index, it also means that we could still finish\nin a state where we have only a partially-valid partition tree. For\nexample, imagine a partition tree with 2 levels (as reported by\npg_partition_tree), then the following happens if an index is created\nconcurrently from the partitioned table of level 0:\n1) indexes are created in level 2 first\n2) partitioned table of level 1 is switched to be ready and valid\n3) indexes of level 1 are created.\n4) partitioned table of level 0 is switched to be ready and valid\nIf the command has a failure, say between 2 and 3, we would have as\nresult a command that has partially succeeded in creating a partition\ntree, while the user was looking for an index for the full tree. 
This\ncomes back to my previous points, where we should make\nindex_set_state_flags() transactional first, and I have begun a\nseparate thread about that:\nhttps://commitfest.postgresql.org/30/2725/\n\nSecond, it also means that the patch should really switch all the\nindexes to be valid in one single transaction, and that we also need\nmore careful refactoring of DefineIndex().\n\nI also had a quick look at the patch for CLUSTER, and it does a much\nbetter job, still it has issues.\n\n+ MemoryContext old_context = MemoryContextSwitchTo(cluster_context);\n+\n+ inhoids = find_all_inheritors(indexOid, NoLock, NULL);\n+ foreach(lc, inhoids)\nThe area where a private memory context is used should be minimized.\nIn this case, you just need to hold the context while saving the\nrelation and clustered index information in the list of RelToCluster\nitems. As a whole, this case is simpler than CIC, so I'd like to\nthink that it would be good to work on that as the next target.\n\nComing to my last point.. This thread has dealt since the beginning\nwith three different problems:\n- REINDEX on partitioned relations.\n- CLUSTER on partitioned relations.\n- CIC on partitioned relations. (Should we also care about DROP INDEX\nCONCURRENTLY as well?)\n\nThe first problem has been solved, not the two others yet. Do you\nthink that it could be possible to begin two new separate threads for\nthe remaining issues, with dedicated CF entries? We could also mark\nthe existing one as committed, retitled for REINDEX as a matter of\nclarity. Also, please note that I am not sure if I will be able to\nlook more at this thread in this CF.\n--\nMichael",
"msg_date": "Tue, 8 Sep 2020 13:31:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: =?utf-8?B?5Zue5aSN77yaaG8=?= =?utf-8?Q?w?= to create index\n concurrently on partitioned table"
},
{
"msg_contents": "On Tue, Sep 08, 2020 at 01:31:05PM +0900, Michael Paquier wrote:\n> - CIC on partitioned relations. (Should we also care about DROP INDEX\n> CONCURRENTLY as well?)\n\nDo you have any idea what you think that should look like for DROP INDEX\nCONCURRENTLY ?\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 11 Sep 2020 19:13:01 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: =?utf-8?B?5Zue5aSN77yaaG8=?= =?utf-8?Q?w?= to create index\n concurrently on partitioned table"
},
{
"msg_contents": "On Fri, Sep 11, 2020 at 07:13:01PM -0500, Justin Pryzby wrote:\n> On Tue, Sep 08, 2020 at 01:31:05PM +0900, Michael Paquier wrote:\n>> - CIC on partitioned relations. (Should we also care about DROP INDEX\n>> CONCURRENTLY as well?)\n> \n> Do you have any idea what you think that should look like for DROP INDEX\n> CONCURRENTLY ?\n\nMaking the maintenance of the partition tree consistent to the user is\nthe critical part here, so my guess on this matter is:\n1) Remove each index from the partition tree and mark the indexes as\ninvalid in the same transaction. This makes sure that after commit no\nindexes would get used for scans, and the partition dependency tree\npis completely removed with the parent table. That's close to what we\ndo in index_concurrently_swap() except that we just want to remove the\ndependencies with the partitions, and not just swap them of course.\n2) Switch each index to INDEX_DROP_SET_DEAD, one per transaction\nshould be fine as that prevents inserts.\n3) Finish the index drop.\n\nStep 2) and 3) could be completely done for each index as part of\nindex_drop(). The tricky part is to integrate 1) cleanly within the\nexisting dependency machinery while still knowing about the list of\npartitions that can be removed. I think that this part is not that\nstraight-forward, but perhaps we could just make this logic part of\nRemoveRelations() when listing all the objects to remove.\n--\nMichael",
"msg_date": "Sat, 12 Sep 2020 10:35:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: =?utf-8?B?5Zue5aSN77yaaG8=?= =?utf-8?Q?w?= to create index\n concurrently on partitioned table"
},
{
"msg_contents": "On Sat, Sep 12, 2020 at 10:35:34AM +0900, Michael Paquier wrote:\n> On Fri, Sep 11, 2020 at 07:13:01PM -0500, Justin Pryzby wrote:\n> > On Tue, Sep 08, 2020 at 01:31:05PM +0900, Michael Paquier wrote:\n> >> - CIC on partitioned relations. (Should we also care about DROP INDEX\n> >> CONCURRENTLY as well?)\n> > \n> > Do you have any idea what you think that should look like for DROP INDEX\n> > CONCURRENTLY ?\n> \n> Making the maintenance of the partition tree consistent to the user is\n> the critical part here, so my guess on this matter is:\n> 1) Remove each index from the partition tree and mark the indexes as\n> invalid in the same transaction. This makes sure that after commit no\n> indexes would get used for scans, and the partition dependency tree\n> pis completely removed with the parent table. That's close to what we\n> do in index_concurrently_swap() except that we just want to remove the\n> dependencies with the partitions, and not just swap them of course.\n> 2) Switch each index to INDEX_DROP_SET_DEAD, one per transaction\n> should be fine as that prevents inserts.\n> 3) Finish the index drop.\n> \n> Step 2) and 3) could be completely done for each index as part of\n> index_drop(). The tricky part is to integrate 1) cleanly within the\n> existing dependency machinery while still knowing about the list of\n> partitions that can be removed. I think that this part is not that\n> straight-forward, but perhaps we could just make this logic part of\n> RemoveRelations() when listing all the objects to remove.\n\nThanks.\n\nI see three implementation ideas..\n\n1. I think your way has an issue that the dependencies are lost. If there's an\ninterruption, the user is maybe left with hundreds or thousands of detached\nindexes to clean up. This is strange since there's actually no detach command\nfor indexes (but they're implicitly \"attached\" when a matching parent index is\ncreated). 
A 2nd issue is that DETACH currently requires an exclusive lock (but\nsee Alvaro's WIP patch).\n\n2. Maybe the easiest way is to mark all indexes invalid and then drop all\npartitions (concurrently) and then the partitioned parent. If interrupted,\nthis would leave a parent index marked \"invalid\", and some child tables with no\nindexes. I think this may be \"ok\". The same thing is possible if a concurrent\nbuild is interrupted, right ?\n\n3. I have a patch which changes index_drop() to \"expand\" a partitioned index into\nits list of children. Each of these becomes a List:\n| indexId, heapId, userIndexRelation, userHeapRelation, heaplocktag, heaprelid, indexrelid\nThe same process is followed as for a single index, but handling all partitions\nat once in two transactions total. Arguably, this is bad since that function\ncurrently takes a single Oid but would now end up operating on a list of indexes.\n\nAnyway, for now this is rebased on 83158f74d.\n\n-- \nJustin",
"msg_date": "Mon, 14 Sep 2020 09:31:03 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: =?utf-8?B?5Zue5aSN77yaaG8=?= =?utf-8?Q?w?= to create index\n concurrently on partitioned table"
},
{
"msg_contents": "On Mon, Sep 14, 2020 at 09:31:03AM -0500, Justin Pryzby wrote:\n> Anyway, for now this is rebased on 83158f74d.\n\nI have not thought yet about all the details of CIC and DIC and what\nyou said upthread, but I have gone through CLUSTER for now, as a\nstart. So, the goal of the patch, as I would define it, is to give a\nway to CLUSTER to work on a partitioned table using a given\npartitioned index. Hence, we would perform CLUSTER automatically from\nthe top-most parent for any partitions that have an index on the same\npartition tree as the partitioned index defined in USING, switching\nindisclustered automatically depending on the index used.\n\n+CREATE TABLE clstrpart3 PARTITION OF clstrpart DEFAULT PARTITION BY RANGE(a);\n+CREATE TABLE clstrpart33 PARTITION OF clstrpart3 DEFAULT;\n CREATE INDEX clstrpart_idx ON clstrpart (a);\n-ALTER TABLE clstrpart CLUSTER ON clstrpart_idx\nSo.. For any testing of partitioned trees, we should be careful to\ncheck if relfilenodes have been changed or not as part of an\noperation, to see if the operation has actually done something.\n\nFrom what I can see, attempting to use a CLUSTER on a top-most\npartitioned table fails to work on child tables, but isn't the goal of\nthe patch to make sure that if we attempt to do the operation on a\npartitioned table using a partitioned index, then the operation should\nbe done as well on the partitions with the partition index part of the\nsame partition tree as the parent partitioned index? If using CLUSTER\non a new partitioned index with USING, it seems to me that we may want\nto check that indisclustered is correctly set for all the partitions\nconcerned in the regression tests. 
It would be good also to check if\nwe have a partition index tree that maps partially with a partition\ntable tree (aka not all table partitions have a partition index), where\nthese don't get clustered because there is no index to work on.\n\n+ MemoryContext old_context = MemoryContextSwitchTo(cluster_context);\n+ inhoids = find_all_inheritors(indexOid, NoLock, NULL);\n+ MemoryContextSwitchTo(old_context);\nEr, isn't that incorrect? I would have assumed that what should be\nsaved in the context of cluster is the list of RelToCluster items.\nAnd we should not do any lookup of the partitions in a different\ncontext, because this may do allocations of things we don't really\nneed to keep around for the context dedicated to CLUSTER. Isn't\nNoLock unsafe here, even knowing that an exclusive lock is taken on\nthe parent? It seems to me that at least schema changes should be\nprevented with a ShareLock in the first transaction building the list\nof elements to cluster.\n\n+ /*\n+ * We have a full list of direct and indirect children, so skip\n+ * partitioned tables and just handle their children.\n+ */\n+ if (get_rel_relkind(relid) == RELKIND_PARTITIONED_TABLE)\n+ continue;\nIt would be better to use RELKIND_HAS_STORAGE here.\n\nAll this stuff needs to be documented clearly.\n--\nMichael",
"msg_date": "Thu, 24 Sep 2020 17:11:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: =?utf-8?B?5Zue5aSN77yaaG8=?= =?utf-8?Q?w?= to create index\n concurrently on partitioned table"
},
{
"msg_contents": "On Thu, Sep 24, 2020 at 05:11:03PM +0900, Michael Paquier wrote:\n> start. So, the goal of the patch, as I would define it, is to give a\n> way to CLUSTER to work on a partitioned table using a given\n> partitioned index. Hence, we would perform CLUSTER automatically from\n> the top-most parent for any partitions that have an index on the same\n> partition tree as the partitioned index defined in USING, switching\n> indisclustered automatically depending on the index used.\n\nI think that's right, except there's no need to look for a compatible\npartitioned index: we just use the child index.\n\nAlso, if a partitioned index is clustered, when we clear indisclustered for\nother indexes, should we also propogate that to their parent indexes, if any ?\n\n> From what I can see, attempting to use a CLUSTER on a top-most\n> partitioned table fails to work on child tables,\n\nOops - it looks like this patch never worked right, due to the RECHECK on\nindisclustered. I think maybe I returned to the CIC patch and never finishing\nwith cluster.\n\n> It would be good also to check if\n> we have a partition index tree that maps partially with a partition\n> table tree (aka no all table partitions have a partition index), where\n> these don't get clustered because there is no index to work on.\n\nThis should not happen, since a incomplete partitioned index is \"invalid\".\n\n> Isn't NoLock unsafe here, even knowing that an exclusive lock is taken on\n> the parent? It seems to me that at least schema changes should be\n> prevented with a ShareLock in the first transaction building the list\n> of elements to cluster.\n\nThanks for noticing. I chose ShareUpdateExclusiveLock since it's\nset-exclusive.\n\n-- \nJustin",
"msg_date": "Sat, 26 Sep 2020 14:56:55 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: =?utf-8?B?5Zue5aSN77yaaG8=?= =?utf-8?Q?w?= to create index\n concurrently on partitioned table"
},
{
"msg_contents": "On Sat, Sep 26, 2020 at 02:56:55PM -0500, Justin Pryzby wrote:\n> Also, if a partitioned index is clustered, when we clear indisclustered for\n> other indexes, should we also propogate that to their parent indexes, if any ?\n\nI am not sure what you mean here. Each partition's cluster runs in\nits own individual transaction based on the patch you sent. Are you\nsuggesting to update indisclustered for the partitioned index of a\npartitioned table and all its parent partitioned in the same\ntransaction, aka a transaction working on the partitioned table?\nDoesn't that mean that if we have a partition tree with multiple\nlayers then we finish by doing multiple time the same operation for\nthe parents?\n\n>> It would be good also to check if\n>> we have a partition index tree that maps partially with a partition\n>> table tree (aka no all table partitions have a partition index), where\n>> these don't get clustered because there is no index to work on.\n>\n> This should not happen, since a incomplete partitioned index is \"invalid\".\n\nIndeed, I did not know this property. I can see also that you have\nadded a test for this case, so that's good if we can rely on that. I\nam still in the process of reviewing this patch, all this handling\naround indisclustered makes it rather complex.\n--\nMichael",
"msg_date": "Mon, 5 Oct 2020 17:46:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: =?utf-8?B?5Zue5aSN77yaaG8=?= =?utf-8?Q?w?= to create index\n concurrently on partitioned table"
},
{
"msg_contents": "On Mon, Oct 05, 2020 at 05:46:27PM +0900, Michael Paquier wrote:\n> On Sat, Sep 26, 2020 at 02:56:55PM -0500, Justin Pryzby wrote:\n> > Also, if a partitioned index is clustered, when we clear indisclustered for\n> > other indexes, should we also propogate that to their parent indexes, if any ?\n> \n> I am not sure what you mean here. Each partition's cluster runs in\n> its own individual transaction based on the patch you sent. Are you\n> suggesting to update indisclustered for the partitioned index of a\n> partitioned table and all its parent partitioned in the same\n> transaction, aka a transaction working on the partitioned table?\n\nNo, I mean that if a partition is no longer clustered on some index, then its\nparent isn't clustered on that indexes parent, either.\n\nIt means that we might do N catalog updates for a partition heirarchy that's N\nlevels deep. Normally, N=2, and we'd clear indisclustered for the index as\nwell as its parent. This is not essential, though.\n\n> >> It would be good also to check if\n> >> we have a partition index tree that maps partially with a partition\n> >> table tree (aka no all table partitions have a partition index), where\n> >> these don't get clustered because there is no index to work on.\n> >\n> > This should not happen, since a incomplete partitioned index is \"invalid\".\n> \n> Indeed, I did not know this property.\n\nI think that's something we can apply for CIC/DIC, too.\nIt's not essential to avoid leaving an \"invalid\" or partial index if\ninterrupted. It's only essential that a partial, partitioned index is not\n\"valid\".\n\nFor DROP IND CONCURRENTLY, I wrote:\n\nOn Mon, Sep 14, 2020 at 09:31:03AM -0500, Justin Pryzby wrote:\n> 2. Maybe the easiest way is to mark all indexes invalid and then drop all\n> partitions (concurrently) and then the partitioned parent. If interrupted,\n> this would leave a parent index marked \"invalid\", and some child tables with no\n> indexes. 
I think this may be \"ok\". The same thing is possible if a concurrent\n> build is interrupted, right ?\n\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 5 Oct 2020 15:08:32 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: =?utf-8?B?5Zue5aSN77yaaG8=?= =?utf-8?Q?w?= to create index\n concurrently on partitioned table"
},
{
"msg_contents": "On Mon, Oct 05, 2020 at 03:08:32PM -0500, Justin Pryzby wrote:\n> It means that we might do N catalog updates for a partition heirarchy that's N\n> levels deep. Normally, N=2, and we'd clear indisclustered for the index as\n> well as its parent. This is not essential, though.\n\nHmm. I got to think more about this one, and being able to ensure a\nconsistent state of indisclustered for partitioned tables and all\ntheir partitions across multiple transactions at all times would be a\nnice property, as we could easily track down if an operation has\nfailed in-flight. The problem here is that we are limited by\nindisclustered being a boolean, so we may set indisclustered one way\nor another on some partitions, or on some parent partitioned tables,\nbut we are not sure if what's down is actually clustered for the\nleaves of a partitioned index, or not. Or maybe we even have an\ninconsistent state, so this does not provide a good user experience.\nThe consistency of a partition tree is a key thing, and we have such \nguarantees with REINDEX (with or without concurrently), so what I'd\nlike to think that what makes sense for indisclustered on a\npartitioned index is to have the following set of guarantees, and\nrelying on indisvalid as being true iff an index partition tree is\ncomplete on a given table partition tree is really important:\n- If indisclustered is set to true for a partitioned index, then we\nhave the guarantee that all its partition and partitioned indexes have\nindisclustered set to true, across all the layers down to the leaves.\n- If indisclustered is set to false for a partitioned index, then we\nhave the guarantee that all its partition and partitioned indexes have\nindisclustered set to false.\n\nIf we happen to attach a new partition to a partitioned table of such\na tree, I guess that it is then logic to have indisclustered set to\nthe same value as the partitioned index when the partition inherits an\nindex. 
So, it seems to me that with the current catalogs we are\nlimited by indisclustered as being a boolean to maintain some kind of\nconsistency across transactions of CLUSTER, as we would use one\ntransaction per leaf for the work. In order to make things better\nhere, we could switch indisclustered to a char, able to use three\nstates:\n- 'e' or enabled, equivalent to indisclustered == true. There should\nbe only one index on a table with 'e' set at a given time.\n- 'd' or disabled, equivalent to indisclustered == false.\n- a new third state, for an equivalent of work-in-progress, let's call\nit 'w', or whatever.\n\nThen, when we begin a CLUSTER on a partitioned table, I would imagine\nthe following:\n- Switch all the indexes selected to 'w' in a first transaction, and\ndo not reset the state of other indexes if there is one 'e'. For\nCLUSTER without USING, we switch the existing 'e' to 'w' if there is\nsuch an index. If there are no indexes select-able, issue an error.\nIf we find an index with 'w', we have a candidate from a previous\nfailed command, so this gets used. I don't think that this design\nrequires a new DDL either as we could reset all 'w' and 'e' to 'd' if\nusing ALTER TABLE SET WITHOUT CLUSTER on a partitioned table. For\nCLUSTER with USING, the partitioned index selected and its leaves are\nswitched to 'w', similarly.\n- Then do the work for all the partitions, one partition per\ntransaction, keeping 'w'.\n- In a last transaction, switch all the partitions from 'w' to 'e',\nresetting any existing 'e'.\n\nALTER TABLE CLUSTER ON should also be a supported operation, where 'e'\ngets switched for all the partitions in a single transaction. Of\ncourse, this should not work for an invalid index. Even with such a\ndesign, I got to wonder if there could be cases where a user does\n*not* want to cluster the leaves of a partition tree with the same\nindex. 
If that were to happen, the partition tree may need a redesign,\nso making things work so that we force partitions to inherit what's\nwanted for the partitioned table makes the most sense to me.\n\nBy the way, I mentioned that previously, but this thread got used for\nREINDEX that is finished, and now we have a discussion going on with\nCLUSTER. There is also stuff related to CIC and DIC. There is also\nonly one CF entry for all that. I really think that this work had\nbetter be split into separate threads, with separate CF entries. Do\nyou mind if I change the contents of the CF app so that the existing\nitem is renamed for REINDEX, which got committed? Then we could create\na new entry for CIC/DIC (it may make sense to split these two as\nwell, but I am not sure if there are any overlaps between the APIs of\nthe two), and a new entry for CLUSTER, depending on the state of the\nwork. The original subject of the thread is also unrelated, this makes\nthe review process unnecessarily complicated, and that's really\nconfusing. Each discussion should go into its own, independent,\nthread.\n--\nMichael",
"msg_date": "Tue, 6 Oct 2020 11:42:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: =?utf-8?B?5Zue5aSN77yaaG8=?= =?utf-8?Q?w?= to create index\n concurrently on partitioned table"
},
{
"msg_contents": "On Tue, Oct 06, 2020 at 11:42:55AM +0900, Michael Paquier wrote:\n> On Mon, Oct 05, 2020 at 03:08:32PM -0500, Justin Pryzby wrote:\n> > It means that we might do N catalog updates for a partition heirarchy that's N\n> > levels deep. Normally, N=2, and we'd clear indisclustered for the index as\n> > well as its parent. This is not essential, though.\n> \n> Hmm. I got to think more about this one, and being able to ensure a\n> consistent state of indisclustered for partitioned tables and all\n> their partitions across multiple transactions at all times would be a\n> nice property, as we could easily track down if an operation has\n> failed in-flight. The problem here is that we are limited by\n> indisclustered being a boolean, so we may set indisclustered one way\n> or another on some partitions, or on some parent partitioned tables,\n> but we are not sure if what's down is actually clustered for the\n> leaves of a partitioned index, or not. Or maybe we even have an\n> inconsistent state, so this does not provide a good user experience.\n> The consistency of a partition tree is a key thing, and we have such \n> guarantees with REINDEX (with or without concurrently), so what I'd\n> like to think that what makes sense for indisclustered on a\n> partitioned index is to have the following set of guarantees, and\n> relying on indisvalid as being true iff an index partition tree is\n> complete on a given table partition tree is really important:\n> - If indisclustered is set to true for a partitioned index, then we\n> have the guarantee that all its partition and partitioned indexes have\n> indisclustered set to true, across all the layers down to the leaves.\n> - If indisclustered is set to false for a partitioned index, then we\n> have the guarantee that all its partition and partitioned indexes have\n> indisclustered set to false.\n> \n> If we happen to attach a new partition to a partitioned table of such\n> a tree, I guess that it is then logic to 
have indisclustered set to\n> the same value as the partitioned index when the partition inherits an\n> index. So, it seems to me that with the current catalogs we are\n> limited by indisclustered as being a boolean to maintain some kind of\n> consistency across transactions of CLUSTER, as we would use one\n> transaction per leaf for the work. In order to make things better\n> here, we could switch indisclustered to a char, able to use three\n> states:\n> - 'e' or enabled, equivalent to indisclustered == true. There should\n> be only one index on a table with 'e' set at a given time.\n> - 'd' or disabled, equivalent to indisclustered == false.\n> - a new third state, for an equivalent of work-in-progress, let's call\n> it 'w', or whatever.\n> \n> Then, when we begin a CLUSTER on a partitioned table, I would imagine\n> the following:\n> - Switch all the indexes selected to 'w' in a first transaction, and\n> do not reset the state of other indexes if there is one 'e'. For\n> CLUSTER without USING, we switch the existing 'e' to 'w' if there is\n> such an index. If there are no indexes select-able, issue an error.\n> If we find an index with 'w', we have a candidate from a previous\n> failed command, so this gets used. I don't think that this design\n> requires a new DDL either as we could reset all 'w' and 'e' to 'd' if\n> using ALTER TABLE SET WITHOUT CLUSTER on a partitioned table. For\n> CLUSTER with USING, the partitioned index selected and its leaves are\n> switched to 'w', similarly.\n> - Then do the work for all the partitions, one partition per\n> transaction, keeping 'w'.\n> - In a last transaction, switch all the partitions from 'w' to 'e',\n> resetting any existing 'e'.\n\nHonestly, I think you're over-thinking and over-engineering indisclustered.\n\nIf \"clusteredness\" was something we offered to maintain across DML, I think\nthat might be important to provide stronger guarantees. 
As it is now, I don't\nthink this patch is worth changing the catalog definition.\n\n> ALTER TABLE CLUSTER ON should also be a supported operation, where 'e'\n> gets switched for all the partitions in a single transaction. Of\n> course, this should not work for an invalid index. Even with such a\n> design, I got to wonder if there could be cases where a user does\n> *not* want to cluster the leaves of a partition tree with the same\n> index. If that were to happen, the partition tree may need a redesign\n> so making things work so as we force partitions to inherit what's\n> wanted for the partitioned table makes the most sense to me.\n\nI wondered the same thing: should clustering a parent index cause the child\nindexes to be marked as clustered ? Or should it be possible for an\nintermediate child to stay (marked as) clustered on some other index. I don't\nhave strong feelings, but the patch currently sets indisclustered as a natural\nconsequence of the implementation, so I tentatively think that's what's right.\nAfter all, CLUSTER sets indisclustered without having to also say\n\"ALTER..CLUSTER ON\".\n\nThis is relevant to the question I raised about unsetting indisclustered for\neach index's *parent* when clustering on a different index.\n\nI think it would be strange if we refused \"ALTER..CLUSTER ON\" for a partition\njust because a different partitioned index was set clustered. We'd clear that,\nlike always, and then (in my proposal) also clear its parent's \"indisclustered\".\nI still don't think that's essential, though.\n\n> By the way, I mentioned that previously, but this thread got used for\n> REINDEX that is finished, and now we have a discussion going on with\n> CLUSTER. There is also stuff related to CIC and DIC. There is also\n> only one CF entry for all that. I really think that this work had\n> better be split into separate threads, with separate CF entries. 
Do\n> you mind if I change the contents of the CF app so as the existing \n> item is renamed for REINDEX, that got committed? Then we could create\n> a new entry for CIC/DIC (it may make sense to split these two as\n> well, but I am not if there are any overlaps between the APIs of the\n> two), and a new entry for CLUSTER, depending on the state of the work.\n> The original subject of the thread is also unrelated, this makes the\n> review process unnecessarily complicated, and that's really\n> confusing. Each discussion should go into its own, independent,\n> thread.\n\nI didn't think it's worth the overhead of closing and opening more CFs.\nBut I don't mind.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 5 Oct 2020 22:07:33 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: =?utf-8?B?5Zue5aSN77yaaG8=?= =?utf-8?Q?w?= to create index\n concurrently on partitioned table"
},
{
"msg_contents": "On Mon, Oct 05, 2020 at 10:07:33PM -0500, Justin Pryzby wrote:\n> Honestly, I think you're over-thinking and over-engineering indisclustered.\n> \n> If \"clusteredness\" was something we offered to maintain across DML, I think\n> that might be important to provide stronger guarantees. As it is now, I don't\n> think this patch is worth changing the catalog definition.\n\nWell, this use case is new because we are discussing the relationship\nof indisclustered across multiple transactions for multiple indexes,\nso I'd rather have this discussion than not, and I have learnt\nthe hard way with REINDEX that we should care a lot about the\nconsistency of partition trees at any step of the operation. Let's\nimagine a simple example here, take this partition tree: p (parent),\nand two partitions p1 and p2. p has two partitioned indexes i and j,\nindexes also present in p1 and p2 as i1, i2, j1 and j2. Let's assume\nthat the user has done a CLUSTER on p USING i that completes, meaning\nthat i, i1 and i2 have indisclustered set. Now let's assume that the\nuser does a CLUSTER on p USING j this time, and that this command\nfails while processing p2, meaning that indisclustered is set for j1,\ni2, and perhaps i or j depending on what the patch does. Per the\nlatest arguments, j would be the one set to indisclustered. From this\ninconsistent state comes a couple of interesting things:\n- A database-wide CLUSTER would finish by using j1 and i2 for the\noperation on the partitions, while the intention was to use j2 for the\nsecond partition as the previous command failed.\n- With CLUSTER p, without USING. Logically, I would assume that we\nwould rely on the value of indisclustered as of j, meaning that we\nwould *enforce* p2 to use j2. 
But it could also be seen as incorrect\nby the user because we would not use the index originally marked as\nsuch.\n\nSo keeping this consistent has the advantage of giving us clear rules\nhere.\n\n> I think it would be strange if we refused \"ALTER..CLUSTER ON\" for a partition\n> just because a different partitioned index was set clustered. We'd clear that,\n> like always, and then (in my proposal) also clear its parents \"indisclustered\".\n> I still don't think that's essential, though.\n\nWhy? Blocking a partition, which may itself be partitioned, from\nswitching to a different index if its partitioned parent uses\nsomething else sounds logical to me, in the end, because the user\noriginally intended to use CLUSTER with a specific index on this\ntree. So I would say that the partitioned table takes priority, and\nthis should be released with a WITHOUT CLUSTER from the partitioned\ntable.\n\n> I didn't think it's worth the overhead of closing and opening more CFs.\n> But I don't mind.\n\nThanks, I'll do some cleanup.\n--\nMichael",
"msg_date": "Tue, 6 Oct 2020 13:38:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: =?utf-8?B?5Zue5aSN77yaaG8=?= =?utf-8?Q?w?= to create index\n concurrently on partitioned table"
}
] |
[
{
"msg_contents": "Hi\r\n\r\nOne czech Postgres user reported performance issue related to speed\r\nHashAggregate in nested loop.\r\n\r\nThe speed of 9.6\r\n\r\nHashAggregate (cost=27586.10..27728.66 rows=14256 width=24)\r\n(actual time=0.003..0.049 rows=39 loops=599203)\r\n\r\nThe speed of 10.7\r\n\r\nHashAggregate (cost=27336.78..27552.78 rows=21600 width=24)\r\n(actual time=0.011..0.156 rows=38 loops=597137)\r\n\r\nSo it looks so HashAgg is about 3x slower - with brutal nested loop it is a\r\nproblem.\r\n\r\nI wrote simple benchmark and really looks so our hash aggregate is slower\r\nand slower.\r\n\r\ncreate table foo(a int, b int, c int, d int, e int, f int);\r\ninsert into foo select random()*1000, random()*4, random()*4, random()* 2,\r\nrandom()*100, random()*100 from generate_series(1,2000000);\r\n\r\nanalyze foo;\r\n\r\n9.6.7\r\npostgres=# explain (analyze, buffers) select i from\r\ngenerate_series(1,500000) g(i) where exists (select count(*) cx from foo\r\ngroup by b, c, d having count(*) = i);\r\n┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n│ QUERY PLAN\r\n │\r\n╞═══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n│ Function Scan on generate_series g (cost=0.00..57739020.00 rows=500\r\nwidth=4) (actual time=807.485..3364.515 rows=74 loops=1) │\r\n│ Filter: (SubPlan 1)\r\n │\r\n│ Rows Removed by Filter: 499926\r\n │\r\n│ Buffers: shared hit=12739, temp read=856 written=855\r\n │\r\n│ SubPlan 1\r\n │\r\n│ -> HashAggregate (cost=57739.00..57739.75 rows=75 width=20) (actual\r\ntime=0.006..0.006 rows=0 loops=500000) │\r\n│ Group Key: foo.b, foo.c, foo.d\r\n │\r\n│ Filter: (count(*) = g.i)\r\n │\r\n│ Rows Removed by Filter: 75\r\n │\r\n│ Buffers: shared hit=12739\r\n │\r\n│ -> Seq Scan on foo (cost=0.00..32739.00 rows=2000000\r\nwidth=12) (actual time=0.015..139.736 rows=2000000 loops=1) │\r\n│ 
Buffers: shared hit=12739\r\n │\r\n│ Planning time: 0.276 ms\r\n │\r\n│ Execution time: 3365.758 ms\r\n │\r\n└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n(14 rows)\r\n\r\n10.9\r\n\r\npostgres=# explain (analyze, buffers) select i from\r\ngenerate_series(1,500000) g(i) where exists (select count(*) cx from foo\r\ngroup by b, c, d having count(*) = i);\r\n┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n│ QUERY PLAN\r\n │\r\n╞═══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n│ Function Scan on generate_series g (cost=0.00..57739020.00 rows=500\r\nwidth=4) (actual time=825.468..4919.063 rows=74 loops=1) │\r\n│ Filter: (SubPlan 1)\r\n │\r\n│ Rows Removed by Filter: 499926\r\n │\r\n│ Buffers: shared hit=12739, temp read=856 written=855\r\n │\r\n│ SubPlan 1\r\n │\r\n│ -> HashAggregate (cost=57739.00..57739.75 rows=75 width=20) (actual\r\ntime=0.009..0.009 rows=0 loops=500000) │\r\n│ Group Key: foo.b, foo.c, foo.d\r\n │\r\n│ Filter: (count(*) = g.i)\r\n │\r\n│ Rows Removed by Filter: 75\r\n │\r\n│ Buffers: shared hit=12739\r\n │\r\n│ -> Seq Scan on foo (cost=0.00..32739.00 rows=2000000\r\nwidth=12) (actual time=0.025..157.887 rows=2000000 loops=1) │\r\n│ Buffers: shared hit=12739\r\n │\r\n│ Planning time: 0.829 ms\r\n │\r\n│ Execution time: 4920.800 ms\r\n │\r\n└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n(14 rows)\r\n\r\nmaster\r\n\r\n\r\npostgres=# explain (analyze, buffers) select i from\r\ngenerate_series(1,500000) g(i) where exists (select count(*) cx from foo\r\ngroup by b, c, d having count(*) = 
i);\r\n┌─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\r\n│ QUERY PLAN\r\n\r\n╞═════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════\r\n│ Function Scan on generate_series g (cost=0.00..28869973750.00\r\nrows=250000 width=4) (actual time=901.639..6057.943 rows=74 loops=1)\r\n│ Filter: (SubPlan 1)\r\n\r\n│ Rows Removed by Filter: 499926\r\n\r\n│ Buffers: shared hit=12739, temp read=855 written=855\r\n\r\n│ SubPlan 1\r\n\r\n│ -> HashAggregate (cost=57739.00..57739.94 rows=1 width=20) (actual\r\ntime=0.012..0.012 rows=0 loops=500000)\r\n│ Group Key: foo.b, foo.c, foo.d\r\n\r\n│ Filter: (count(*) = g.i)\r\n\r\n│ Peak Memory Usage: 37 kB\r\n\r\n│ Rows Removed by Filter: 75\r\n\r\n│ Buffers: shared hit=12739\r\n\r\n│ -> Seq Scan on foo (cost=0.00..32739.00 rows=2000000\r\nwidth=12) (actual time=0.017..262.497 rows=2000000 loops=1)\r\n│ Buffers: shared hit=12739\r\n\r\n│ Planning Time: 0.275 ms\r\n\r\n│ Buffers: shared hit=1\r\n\r\n│ Execution Time: 6059.266 ms\r\n\r\n└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\r\n(16 rows)\r\n\r\nRegards\r\n\r\nPavel",
"msg_date": "Wed, 3 Jun 2020 17:32:47 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "significant slowdown of HashAggregate between 9.6 and 10"
},
{
"msg_contents": "st 3. 6. 2020 v 17:32 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\r\nnapsal:\r\n\r\n> Hi\r\n>\r\n> One czech Postgres user reported performance issue related to speed\r\n> HashAggregate in nested loop.\r\n>\r\n> The speed of 9.6\r\n>\r\n> HashAggregate (cost=27586.10..27728.66 rows=14256 width=24)\r\n> (actual time=0.003..0.049 rows=39 loops=599203)\r\n>\r\n> The speed of 10.7\r\n>\r\n> HashAggregate (cost=27336.78..27552.78 rows=21600 width=24)\r\n> (actual time=0.011..0.156 rows=38 loops=597137)\r\n>\r\n> So it looks so HashAgg is about 3x slower - with brutal nested loop it is\r\n> a problem.\r\n>\r\n> I wrote simple benchmark and really looks so our hash aggregate is slower\r\n> and slower.\r\n>\r\n> create table foo(a int, b int, c int, d int, e int, f int);\r\n> insert into foo select random()*1000, random()*4, random()*4, random()* 2,\r\n> random()*100, random()*100 from generate_series(1,2000000);\r\n>\r\n> analyze foo;\r\n>\r\n> 9.6.7\r\n> postgres=# explain (analyze, buffers) select i from\r\n> generate_series(1,500000) g(i) where exists (select count(*) cx from foo\r\n> group by b, c, d having count(*) = i);\r\n>\r\n> ┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n> │ QUERY PLAN\r\n> │\r\n>\r\n> ╞═══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n> │ Function Scan on generate_series g (cost=0.00..57739020.00 rows=500\r\n> width=4) (actual time=807.485..3364.515 rows=74 loops=1) │\r\n> │ Filter: (SubPlan 1)\r\n> │\r\n> │ Rows Removed by Filter: 499926\r\n> │\r\n> │ Buffers: shared hit=12739, temp read=856 written=855\r\n> │\r\n> │ SubPlan 1\r\n> │\r\n> │ -> HashAggregate (cost=57739.00..57739.75 rows=75 width=20)\r\n> (actual time=0.006..0.006 rows=0 loops=500000) │\r\n> │ Group Key: foo.b, foo.c, foo.d\r\n> │\r\n> │ Filter: (count(*) = g.i)\r\n> │\r\n> │ Rows Removed 
by Filter: 75\r\n> │\r\n> │ Buffers: shared hit=12739\r\n> │\r\n> │ -> Seq Scan on foo (cost=0.00..32739.00 rows=2000000\r\n> width=12) (actual time=0.015..139.736 rows=2000000 loops=1) │\r\n> │ Buffers: shared hit=12739\r\n> │\r\n> │ Planning time: 0.276 ms\r\n> │\r\n> │ Execution time: 3365.758 ms\r\n> │\r\n>\r\n> └───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n> (14 rows)\r\n>\r\n> 10.9\r\n>\r\n> postgres=# explain (analyze, buffers) select i from\r\n> generate_series(1,500000) g(i) where exists (select count(*) cx from foo\r\n> group by b, c, d having count(*) = i);\r\n>\r\n> ┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n> │ QUERY PLAN\r\n> │\r\n>\r\n> ╞═══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n> │ Function Scan on generate_series g (cost=0.00..57739020.00 rows=500\r\n> width=4) (actual time=825.468..4919.063 rows=74 loops=1) │\r\n> │ Filter: (SubPlan 1)\r\n> │\r\n> │ Rows Removed by Filter: 499926\r\n> │\r\n> │ Buffers: shared hit=12739, temp read=856 written=855\r\n> │\r\n> │ SubPlan 1\r\n> │\r\n> │ -> HashAggregate (cost=57739.00..57739.75 rows=75 width=20)\r\n> (actual time=0.009..0.009 rows=0 loops=500000) │\r\n> │ Group Key: foo.b, foo.c, foo.d\r\n> │\r\n> │ Filter: (count(*) = g.i)\r\n> │\r\n> │ Rows Removed by Filter: 75\r\n> │\r\n> │ Buffers: shared hit=12739\r\n> │\r\n> │ -> Seq Scan on foo (cost=0.00..32739.00 rows=2000000\r\n> width=12) (actual time=0.025..157.887 rows=2000000 loops=1) │\r\n> │ Buffers: shared hit=12739\r\n> │\r\n> │ Planning time: 0.829 ms\r\n> │\r\n> │ Execution time: 4920.800 ms\r\n> │\r\n>\r\n> └───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n> (14 rows)\r\n>\r\n> master\r\n>\r\n>\r\n> 
postgres=# explain (analyze, buffers) select i from\r\n> generate_series(1,500000) g(i) where exists (select count(*) cx from foo\r\n> group by b, c, d having count(*) = i);\r\n>\r\n> ┌─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\r\n> │ QUERY PLAN\r\n>\r\n>\r\n> ╞═════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════\r\n> │ Function Scan on generate_series g (cost=0.00..28869973750.00\r\n> rows=250000 width=4) (actual time=901.639..6057.943 rows=74 loops=1)\r\n> │ Filter: (SubPlan 1)\r\n>\r\n> │ Rows Removed by Filter: 499926\r\n>\r\n> │ Buffers: shared hit=12739, temp read=855 written=855\r\n>\r\n> │ SubPlan 1\r\n>\r\n> │ -> HashAggregate (cost=57739.00..57739.94 rows=1 width=20) (actual\r\n> time=0.012..0.012 rows=0 loops=500000)\r\n> │ Group Key: foo.b, foo.c, foo.d\r\n>\r\n> │ Filter: (count(*) = g.i)\r\n>\r\n> │ Peak Memory Usage: 37 kB\r\n>\r\n> │ Rows Removed by Filter: 75\r\n>\r\n> │ Buffers: shared hit=12739\r\n>\r\n> │ -> Seq Scan on foo (cost=0.00..32739.00 rows=2000000\r\n> width=12) (actual time=0.017..262.497 rows=2000000 loops=1)\r\n> │ Buffers: shared hit=12739\r\n>\r\n> │ Planning Time: 0.275 ms\r\n>\r\n> │ Buffers: shared hit=1\r\n>\r\n> │ Execution Time: 6059.266 ms\r\n>\r\n>\r\n> └─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\r\n> (16 rows)\r\n>\r\n> Regards\r\n>\r\n\r\nI tried to run the same query on half the data size, and the performance is\r\nalmost the same. Probably the performance issue is related to initialization\r\nor finalization of aggregation.\r\n\r\nPavel\r\n\r\n\r\n>\r\n> Pavel\r\n>",
"msg_date": "Wed, 3 Jun 2020 17:43:08 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: significant slowdown of HashAggregate between 9.6 and 10"
},
{
"msg_contents": "Hi,\n\nNot sure what's the root cause, but I can reproduce it. Timings for 9.6,\n10 and master (all built from git with the same options) without explain\nanalyze look like this:\n\n9.6\n-----------------\nTime: 1971.314 ms\nTime: 1995.875 ms\nTime: 1997.408 ms\nTime: 2069.913 ms\nTime: 2004.196 ms\n\n10\n-----------------------------\nTime: 2815.434 ms (00:02.815)\nTime: 2862.589 ms (00:02.863)\nTime: 2841.126 ms (00:02.841)\nTime: 2803.040 ms (00:02.803)\nTime: 2805.527 ms (00:02.806)\n\nmaster\n-----------------------------\nTime: 3479.233 ms (00:03.479)\nTime: 3537.901 ms (00:03.538)\nTime: 3459.314 ms (00:03.459)\nTime: 3542.810 ms (00:03.543)\nTime: 3482.141 ms (00:03.482)\n\nSo there seems to be +40% between 9.6 and 10, and further +25% between\n10 and master. However, plain hashagg, measured e.g. like this:\n\n select count(*) cx from foo group by b, c, d having count(*) = 1;\n\ndoes not indicate any slowdown at all, so I think you're right it has\nsomething to do with the looping.\n\nProfiles from those versions look like this:\n\n9.6\n---------------------------------------------------------\nSamples\nOverhead Shared Objec Symbol\n 14.19% postgres [.] ExecMakeFunctionResultNoSets\n 13.65% postgres [.] finalize_aggregates\n 12.54% postgres [.] hash_seq_search\n 6.70% postgres [.] finalize_aggregate.isra.0\n 5.71% postgres [.] ExecEvalParamExec\n 5.54% postgres [.] ExecEvalAggref\n 5.00% postgres [.] ExecStoreMinimalTuple\n 4.34% postgres [.] ExecAgg\n 4.08% postgres [.] ExecQual\n 2.67% postgres [.] slot_deform_tuple\n 2.24% postgres [.] pgstat_init_function_usage\n 2.22% postgres [.] check_stack_depth\n 2.14% postgres [.] MemoryContextReset\n 1.89% postgres [.] hash_search_with_hash_value\n 1.72% postgres [.] project_aggregates\n 1.68% postgres [.] pgstat_end_function_usage\n 1.59% postgres [.] slot_getattr\n\n\n10\n------------------------------------------------------------\nSamples\nOverhead Shared Object Symbol\n 15.18% postgres [.] 
slot_deform_tuple\n 13.09% postgres [.] agg_retrieve_hash_table\n 12.02% postgres [.] ExecInterpExpr\n 7.47% postgres [.] finalize_aggregates\n 7.38% postgres [.] tuplehash_iterate\n 5.13% postgres [.] prepare_projection_slot\n 4.86% postgres [.] finalize_aggregate.isra.0\n 4.05% postgres [.] bms_is_member\n 3.97% postgres [.] slot_getallattrs\n 3.59% postgres [.] ExecStoreMinimalTuple\n 2.85% postgres [.] project_aggregates\n 1.95% postgres [.] ExecClearTuple\n 1.71% libc-2.30.so [.] __memset_avx2_unaligned_erms\n 1.69% postgres [.] ExecEvalParamExec\n 1.58% postgres [.] MemoryContextReset\n 1.17% postgres [.] slot_getattr\n 1.03% postgres [.] slot_getsomeattrs\n\n\nmaster\n--------------------------------------------------------------\nSamples\nOverhead Shared Object Symbol\n 17.07% postgres [.] agg_retrieve_hash_table\n 15.46% postgres [.] tuplehash_iterate\n 11.83% postgres [.] tts_minimal_getsomeattrs\n 9.39% postgres [.] ExecInterpExpr\n 6.94% postgres [.] prepare_projection_slot\n 4.85% postgres [.] finalize_aggregates\n 4.27% postgres [.] bms_is_member\n 3.80% postgres [.] finalize_aggregate.isra.0\n 3.80% postgres [.] tts_minimal_store_tuple\n 2.22% postgres [.] project_aggregates\n 2.07% postgres [.] tts_virtual_clear\n 2.07% postgres [.] MemoryContextReset\n 1.78% postgres [.] tts_minimal_clear\n 1.61% postgres [.] ExecEvalParamExec\n 1.46% postgres [.] slot_getsomeattrs_int\n 1.34% libc-2.30.so [.] __memset_avx2_unaligned_erms\n\nNot sure what to think about this. Seems slot_deform_tuple got way more\nexpensive between 9.6 and 10, for some reason. \n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 3 Jun 2020 21:31:01 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: significant slowdown of HashAggregate between 9.6 and 10"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-03 21:31:01 +0200, Tomas Vondra wrote:\n> So there seems to be +40% between 9.6 and 10, and further +25% between\n> 10 and master. However, plain hashagg, measured e.g. like this:\n\nUgh.\n\nSince I am a likely culprit, I'm taking a look.\n\n\n> Not sure what to think about this. Seems slot_deform_tuple got way more\n> expensive between 9.6 and 10, for some reason.\n\nMight indicate too many calls instead. Or it could just be the fact that\nwe have expressions deform all columns once, instead of deforming\ncolumns one-by-one now.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 3 Jun 2020 13:26:43 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: significant slowdown of HashAggregate between 9.6 and 10"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-03 13:26:43 -0700, Andres Freund wrote:\n> On 2020-06-03 21:31:01 +0200, Tomas Vondra wrote:\n> > So there seems to be +40% between 9.6 and 10, and further +25% between\n> > 10 and master. However, plain hashagg, measured e.g. like this:\n\nAs far as I can tell the 10->master difference comes largely from the\ndifference of the number of buckets in the hashtable.\n\nIn 10 it is:\nBreakpoint 1, tuplehash_create (ctx=0x5628251775c8, nelements=75, private_data=0x5628251952f0)\nand in master it is:\nBreakpoint 1, tuplehash_create (ctx=0x5628293a0a70, nelements=256, private_data=0x5628293a0b90)\n\nAs far as I can tell the timing difference simply is the cost of\niterating 500k times over a hashtable with fairly few entries. Which is,\nunsurprisingly, more expensive if the hashtable is larger.\n\nThe reason the hashtable got bigger in 12 is\n\ncommit 1f39bce021540fde00990af55b4432c55ef4b3c7\nAuthor: Jeff Davis <jdavis@postgresql.org>\nDate: 2020-03-18 15:42:02 -0700\n\n Disk-based Hash Aggregation.\n\nwhich introduced\n\n+/* minimum number of initial hash table buckets */\n+#define HASHAGG_MIN_BUCKETS 256\n\n\nI don't really see much explanation for that part in the commit, perhaps\nJeff can chime in?\n\n\nI think optimizing for the gazillion hash table scans isn't particularly\nimportant. Rarely is a query going to have 500k scans of unchanging\naggregated data. So I'm not too concerned about the 13 regression - but\nI also see very little reason to just always use 256 buckets? It's\npretty darn common to end up with 1-2 groups, what's the point of this?\n\n\nI'll look into 9.6->10 after buying groceries... But I'd wish there were\na relevant benchmark, I don't think it's worth optimizing for this case.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 4 Jun 2020 11:41:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: significant slowdown of HashAggregate between 9.6 and 10"
},
{
"msg_contents": "On Thu, 2020-06-04 at 11:41 -0700, Andres Freund wrote:\n> +/* minimum number of initial hash table buckets */\n> +#define HASHAGG_MIN_BUCKETS 256\n> \n> \n> I don't really see much explanation for that part in the commit,\n> perhaps\n> Jeff can chime in?\n\nI did this in response to a review comment (point #5):\n\n\nhttps://www.postgresql.org/message-id/20200219191636.gvdywx32kwbix6kd@development\n\nTomas suggested a min of 1024, and I thought I was being more\nconservative choosing 256. Still too high, I guess?\n\nI can lower it. What do you think is a reasonable minimum?\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Thu, 04 Jun 2020 18:22:03 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: significant slowdown of HashAggregate between 9.6 and 10"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-04 18:22:03 -0700, Jeff Davis wrote:\n> On Thu, 2020-06-04 at 11:41 -0700, Andres Freund wrote:\n> > +/* minimum number of initial hash table buckets */\n> > +#define HASHAGG_MIN_BUCKETS 256\n> > \n> > \n> > I don't really see much explanation for that part in the commit,\n> > perhaps\n> > Jeff can chime in?\n> \n> I did this in response to a review comment (point #5):\n> \n> \n> https://www.postgresql.org/message-id/20200219191636.gvdywx32kwbix6kd@development\n> \n> Tomas suggested a min of 1024, and I thought I was being more\n> conservative choosing 256. Still too high, I guess?\n\n> I can lower it. What do you think is a reasonable minimum?\n\nI don't really see why there needs to be a minimum bigger than 1?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 4 Jun 2020 18:57:58 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: significant slowdown of HashAggregate between 9.6 and 10"
},
{
"msg_contents": "On Thu, Jun 04, 2020 at 06:57:58PM -0700, Andres Freund wrote:\n>Hi,\n>\n>On 2020-06-04 18:22:03 -0700, Jeff Davis wrote:\n>> On Thu, 2020-06-04 at 11:41 -0700, Andres Freund wrote:\n>> > +/* minimum number of initial hash table buckets */\n>> > +#define HASHAGG_MIN_BUCKETS 256\n>> >\n>> >\n>> > I don't really see much explanation for that part in the commit,\n>> > perhaps\n>> > Jeff can chime in?\n>>\n>> I did this in response to a review comment (point #5):\n>>\n>>\n>> https://www.postgresql.org/message-id/20200219191636.gvdywx32kwbix6kd@development\n>>\n>> Tomas suggested a min of 1024, and I thought I was being more\n>> conservative choosing 256. Still too high, I guess?\n>\n>> I can lower it. What do you think is a reasonable minimum?\n>\n>I don't really see why there needs to be a minimum bigger than 1?\n>\n\nI think you're right. I think I was worried about having to resize the\nhash table in case of an under-estimate, and it seemed fine to waste a\ntiny bit more memory to prevent that. But this example shows we may need\nto scan the hash table sequentially, which means it's not just about\nmemory consumption. So in hindsight we either don't need the limit at\nall, or maybe it could be much lower (IIRC it reduces probability of\ncollision, but maybe dynahash does that anyway internally).\n\nI wonder if hashjoin has the same issue, but probably not - I don't\nthink we'll ever scan that internal hash table sequentially.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 5 Jun 2020 15:25:26 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: significant slowdown of HashAggregate between 9.6 and 10"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-05 15:25:26 +0200, Tomas Vondra wrote:\n> I think you're right. I think I was worried about having to resize the\n> hash table in case of an under-estimate, and it seemed fine to waste a\n> tiny bit more memory to prevent that.\n\nIt's pretty cheap to resize a hashtable with a handful of entries, so I'm not\nworried about that. It's also how it has worked for a *long* time, so I think\nunless we have some good reason to change that, I wouldn't.\n\n\n> But this example shows we may need to scan the hash table\n> sequentially, which means it's not just about memory consumption.\n\nWe *always* scan the hashtable sequentially, no? Otherwise there's no way to\nget at the aggregated data.\n\n\n> So in hindsight we either don't need the limit at all, or maybe it\n> could be much lower (IIRC it reduces probability of collision, but\n> maybe dynahash does that anyway internally).\n\nThis is simplehash using code. Which resizes on a load factor of 0.9.\n\n\n> I wonder if hashjoin has the same issue, but probably not - I don't\n> think we'll ever scan that internal hash table sequentially.\n\nI think we do for some outer joins (c.f. ExecPrepHashTableForUnmatched()), but\nit's probably not relevant performance-wise.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 5 Jun 2020 09:33:17 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: significant slowdown of HashAggregate between 9.6 and 10"
}
]
[
{
"msg_contents": "Hackers,\n\nThe name \"relkind\" normally refers to a field of type 'char' with values like 'r' for \"table\" and 'i' for \"index\". In AlterTableStmt and CreateTableAsStmt, this naming convention was abused for a field of type enum ObjectType. Often, such fields are named \"objtype\", though also \"kind\", \"removeType\", \"renameType\", etc.\n\nI don't care to debate those other names, though in passing I'll say that \"kind\" seems not great. The \"relkind\" name is particularly bad, though. It is confusing in functions that also operate on a RangeTblEntry object, which also has a field named \"relkind\", and is confusing in light of the function get_relkind_objtype() which maps from \"relkind\" to \"objtype\", implying quite correctly that those two things are distinct.\n\nThe attached patch cleans this up. How many toes am I stepping on here? Changing the names was straightforward and doesn't seem to cause any issues with 'make check-world'. Any objection?\n\nFor those interested in the larger context of this patch, I am trying to clean up any part of the code that makes it harder to write and test new access methods. When implementing a new AM, one currently needs to `grep -i relkind` to find a long list of files that need special treatment. One then needs to consider whether special logic for the new AM needs to be inserted into all these spots. As such, it is nice if these spots have as little confusing naming as possible. This patch makes that process a little easier. I have another patch (to be posted shortly) that cleans up the #define RELKIND_XXX stuff using a new RelKind enum and special macros while keeping the relkind fields as type 'char'. Along with converting code to use switch(relkind) rather than if (relkind...) statements, the compiler now warns on unhandled cases when you add a new RelKind to the list, making it easier to find all the places you need to update. 
I decided to keep that work independent of this patch, as the code is logically distinct.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 3 Jun 2020 10:05:26 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Towards easier AMs: Cleaning up inappropriate use of name \"relkind\""
},
{
"msg_contents": "On 2020-Jun-03, Mark Dilger wrote:\n\n> The name \"relkind\" normally refers to a field of type 'char' with\n> values like 'r' for \"table\" and 'i' for \"index\". In AlterTableStmt\n> and CreateTableAsStmt, this naming convention was abused for a field\n> of type enum ObjectType.\n\nI agree that \"relkind\" here is a misnomer, and I bet that what happened\nhere is that the original patch Gavin developed was using the relkind\nenum from pg_class and was later changed to the OBJECT_ defines after\npatch review, but the struct member name remained. I don't object to\nthe proposed renaming.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 3 Jun 2020 13:26:28 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Towards easier AMs: Cleaning up inappropriate use of name\n \"relkind\""
},
{
"msg_contents": "> On 3 Jun 2020, at 19:05, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n\n> The attached patch cleans this up.\n\nThe gram.y hunks in this patch no longer applies, please submit a rebased\nversion. I'm marking the entry Waiting on Author in the meantime.\n\ncheers ./daniel\n\n\n",
"msg_date": "Wed, 1 Jul 2020 11:45:06 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Towards easier AMs: Cleaning up inappropriate use of name\n \"relkind\""
},
{
"msg_contents": "> On Jul 1, 2020, at 2:45 AM, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 3 Jun 2020, at 19:05, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n>> The attached patch cleans this up.\n> \n> The gram.y hunks in this patch no longer applies, please submit a rebased\n> version. I'm marking the entry Waiting on Author in the meantime.\n\nRebased patch attached. Thanks for mentioning it!\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 1 Jul 2020 09:46:34 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Towards easier AMs: Cleaning up inappropriate use of name\n \"relkind\""
},
{
"msg_contents": "> On Jun 3, 2020, at 10:05 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> I have another patch (to be posted shortly) that cleans up the #define RELKIND_XXX stuff using a new RelKind enum and special macros while keeping the relkind fields as type 'char'. Along with converting code to use switch(relkind) rather than if (relkind...) statements, the compiler now warns on unhandled cases when you add a new RelKind to the list, making it easier to find all the places you need to update. I decided to keep that work independent of this patch, as the code is logically distinct.\n\nMost of the work in this patch is mechanical replacement of if/else if/else statements which hinge on relkind to switch statements on relkind. The patch is not philosophically very interesting, but it is fairly long. Reviewers might start by scrolling down the patch to the changes in src/include/catalog/pg_class.h\n\nThere are no intentional behavioral changes in this patch.\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 1 Jul 2020 17:04:19 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Towards easier AMs: Cleaning up inappropriate use of name\n \"relkind\""
},
{
"msg_contents": "On Wed, Jul 01, 2020 at 09:46:34AM -0700, Mark Dilger wrote:\n> Rebased patch attached. Thanks for mentioning it!\n\nThere are two patches on this thread v2-0001 being much smaller than\nv2-0002. I have looked at 0001 for now, and, like Alvaro, this\nrenaming makes sense to me. Those commands work on objects that are\nrelkinds, except for one OBJECT_TYPE. So, let's get 0001 patch\nmerged. Any objections from others?\n--\nMichael",
"msg_date": "Wed, 8 Jul 2020 22:00:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Towards easier AMs: Cleaning up inappropriate use of name\n \"relkind\""
},
{
"msg_contents": "On Wed, Jul 08, 2020 at 10:00:47PM +0900, Michael Paquier wrote:\n> There are two patches on this thread v2-0001 being much smaller than\n> v2-0002. I have looked at 0001 for now, and, like Alvaro, this\n> renaming makes sense to me. Those commands work on objects that are\n> relkinds, except for one OBJECT_TYPE. So, let's get 0001 patch\n> merged. Any objections from others?\n\nI have been through this one again and applied it as cc35d89.\n--\nMichael",
"msg_date": "Sat, 11 Jul 2020 13:44:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Towards easier AMs: Cleaning up inappropriate use of name\n \"relkind\""
},
{
"msg_contents": "On Wed, Jul 01, 2020 at 05:04:19PM -0700, Mark Dilger wrote:\n> Most of the work in this patch is mechanical replacement of if/else\n> if/else statements which hinge on relkind to switch statements on\n> relkind. The patch is not philosophically very interesting, but it\n> is fairly long. Reviewers might start by scrolling down the patch\n> to the changes in src/include/catalog/pg_class.h \n> \n> There are no intentional behavioral changes in this patch.\n\nPlease note that 0002 does not apply anymore, there are conflicts in\nheap.c and tablecmds.c, and that there are noise diffs, likely because\nyou ran pgindent and included places unrelated to this patch. Anyway,\nI am not really a fan of this patch. I could see a benefit in\nswitching to an enum so as for places where we use a switch/case\nwithout a default we would be warned if a new relkind gets added or if\na value is not covered, but then we should not really need\nRELKIND_NULL, no?\n--\nMichael",
"msg_date": "Sat, 11 Jul 2020 15:00:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Towards easier AMs: Cleaning up inappropriate use of name\n \"relkind\""
},
{
"msg_contents": "\n\n> On Jul 10, 2020, at 9:44 PM, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Wed, Jul 08, 2020 at 10:00:47PM +0900, Michael Paquier wrote:\n>> There are two patches on this thread v2-0001 being much smaller than\n>> v2-0002. I have looked at 0001 for now, and, like Alvaro, this\n>> renaming makes sense to me. Those commands work on objects that are\n>> relkinds, except for one OBJECT_TYPE. So, let's get 0001 patch\n>> merged. Any objections from others?\n> \n> I have been through this one again and applied it as cc35d89.\n> --\n> Michael\n\nThanks for committing!\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Sat, 11 Jul 2020 09:23:34 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Towards easier AMs: Cleaning up inappropriate use of name\n \"relkind\""
},
{
"msg_contents": "\n\n> On Jul 10, 2020, at 11:00 PM, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Wed, Jul 01, 2020 at 05:04:19PM -0700, Mark Dilger wrote:\n>> Most of the work in this patch is mechanical replacement of if/else\n>> if/else statements which hinge on relkind to switch statements on\n>> relkind. The patch is not philosophically very interesting, but it\n>> is fairly long. Reviewers might start by scrolling down the patch\n>> to the changes in src/include/catalog/pg_class.h \n>> \n>> There are no intentional behavioral changes in this patch.\n> \n> Please note that 0002 does not apply anymore, there are conflicts in\n> heap.c and tablecmds.c, and that there are noise diffs, likely because\n> you ran pgindent and included places unrelated to this patch.\n\nI can resubmit, but should like to address your second point before bothering...\n\n> Anyway,\n> I am not really a fan of this patch. I could see a benefit in\n> switching to an enum so as for places where we use a switch/case\n> without a default we would be warned if a new relkind gets added or if\n> a value is not covered, but then we should not really need\n> RELKIND_NULL, no?\n\nThere are code paths where relkind is sometimes '\\0' under normal, non-exceptional conditions. This happens in\n\n\tallpaths.c: set_append_rel_size\n\trewriteHandler.c: view_query_is_auto_updatable\n\tlockcmds.c: LockViewRecurse_walker\n\tpg_depend.c: getOwnedSequences_internal\n\nDoesn't this justify having RELKIND_NULL in the enum?\n\nIt is not the purpose of this patch to change the behavior of the code. This is just a structural patch, using an enum and switches rather than char and if/else if/else blocks.\n\nSubsequent patches could build on this work, such as changing the behavior when code encounters a relkind value outside the code's expected set of relkind values. 
Whether those patches would add Assert()s, elog()s, or ereport()s is not something I'd like to have to debate as part of this patch submission. Assert()s have the advantage of costing nothing in production builds, but elog()s have the advantage of protecting against corrupt relkind values at runtime in production.\n\nGetting the compiler to warn when a new relkind is added to the enumeration but not handled in a switch is difficult. One strategy is to add -Wswitch-enum, but that would require refactoring switches over all enums, not just over the RelKind enum, and for some enums, that would require a large number of extra lines to be added to the code. Another strategy is to remove the default label from switches over RelKind, but that removes protections against invalid relkinds being encountered.\n\nDo you have a preference about which directions I should pursue? Or do you think the patch idea itself is dead?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Sat, 11 Jul 2020 12:14:11 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Towards easier AMs: Cleaning up inappropriate use of name\n \"relkind\""
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> On Jul 10, 2020, at 11:00 PM, Michael Paquier <michael@paquier.xyz> wrote:\n>> I am not really a fan of this patch. I could see a benefit in\n>> switching to an enum so as for places where we use a switch/case\n>> without a default we would be warned if a new relkind gets added or if\n>> a value is not covered, but then we should not really need\n>> RELKIND_NULL, no?\n\n> There are code paths where relkind is sometimes '\\0' under normal, non-exceptional conditions. This happens in\n\n> \tallpaths.c: set_append_rel_size\n> \trewriteHandler.c: view_query_is_auto_updatable\n> \tlockcmds.c: LockViewRecurse_walker\n> \tpg_depend.c: getOwnedSequences_internal\n\n> Doesn't this justify having RELKIND_NULL in the enum?\n\nI'd say no. I think including an intentionally invalid value in such\nan enum is horrid, mainly because it will force a lot of places to cover\nthat value when they shouldn't (or else draw \"enum value not handled in\nswitch\" warnings). The confusion factor about whether it maybe *is*\na valid value is not to be discounted, either.\n\nIf we can't readily get rid of the use of '\\0' in these code paths,\nmaybe trying to convert to an enum isn't going to be a win after all.\n\n> Getting the compiler to warn when a new relkind is added to the\n> enumeration but not handled in a switch is difficult.\n\nWe already have a project policy about how to do that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 11 Jul 2020 15:32:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Towards easier AMs: Cleaning up inappropriate use of name\n \"relkind\""
},
{
"msg_contents": "On Sat, Jul 11, 2020 at 03:32:55PM -0400, Tom Lane wrote:\n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> There are code paths where relkind is sometimes '\\0' under normal,\n>> non-exceptional conditions. This happens in\n>> \n>> \tallpaths.c: set_append_rel_size\n>> \trewriteHandler.c: view_query_is_auto_updatable\n>> \tlockcmds.c: LockViewRecurse_walker\n>> \tpg_depend.c: getOwnedSequences_internal\n\nThere are more code paths than what's mentioned upthread when it comes\nto relkinds and \\0. For example, I can quickly grep for acl.c that\nrelies on get_rel_relkind() returning \\0 when the relkind cannot be\nfound. And we do that for get_typtype() as well in the syscache.\n\n>> Doesn't this justify having RELKIND_NULL in the enum?\n> \n> I'd say no. I think including an intentionally invalid value in such\n> an enum is horrid, mainly because it will force a lot of places to cover\n> that value when they shouldn't (or else draw \"enum value not handled in\n> switch\" warnings). The confusion factor about whether it maybe *is*\n> a valid value is not to be discounted, either.\n\nI agree here that the situation could be improved because we never\nstore this value in the catalogs. Perhaps there would be a benefit in\nswitching to an enum in the long run, I am not sure. But if we do so,\nRELKIND_NULL should not be around.\n--\nMichael",
"msg_date": "Sun, 12 Jul 2020 20:59:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Towards easier AMs: Cleaning up inappropriate use of name\n \"relkind\""
},
{
"msg_contents": "> On Jul 12, 2020, at 4:59 AM, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Sat, Jul 11, 2020 at 03:32:55PM -0400, Tom Lane wrote:\n>> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>>> There are code paths where relkind is sometimes '\\0' under normal,\n>>> non-exceptional conditions. This happens in\n>>> \n>>> \tallpaths.c: set_append_rel_size\n>>> \trewriteHandler.c: view_query_is_auto_updatable\n>>> \tlockcmds.c: LockViewRecurse_walker\n>>> \tpg_depend.c: getOwnedSequences_internal\n> \n> There are more code paths than what's mentioned upthread when it comes\n> to relkinds and \\0. For example, I can quickly grep for acl.c that\n> relies on get_rel_relkind() returning \\0 when the relkind cannot be\n> found. And we do that for get_typtype() as well in the syscache.\n\nI was thinking about places in the code that test a relkind variable against a list of values, rather than places that return a relkind to callers, though certainly those two things are related. It's possible that I've missed some places in the code where \\0 might be encountered, but I've added Asserts against unexpected values in v3.\n\nI left get_rel_relkind() as is. There does not seem to be anything wrong with it returning \\0 as long as all callers are prepared to deal with that result.\n\n> \n>>> Doesn't this justify having RELKIND_NULL in the enum?\n>> \n>> I'd say no. I think including an intentionally invalid value in such\n>> an enum is horrid, mainly because it will force a lot of places to cover\n>> that value when they shouldn't (or else draw \"enum value not handled in\n>> switch\" warnings). The confusion factor about whether it maybe *is*\n>> a valid value is not to be discounted, either.\n> \n> I agree here that the situation could be improved because we never\n> store this value in the catalogs. Perhaps there would be a benefit in\n> switching to an enum in the long run, I am not sure. 
But if we do so,\n> RELKIND_NULL should not be around.\n\nIn the v3 patch, I have removed RELKIND_NULL from the enum, and also removed default: labels from switches over RelKind. The patch is also rebased.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 14 Jul 2020 15:28:27 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Towards easier AMs: Cleaning up inappropriate use of name\n \"relkind\""
},
{
"msg_contents": "This patch is way too large. Probably in part explained by the fact\nthat you seem to have run pgindent and absorbed a lot of unrelated\nchanges. This makes the patch essentially unreviewable.\n\nI think you should define a RelationGetRelkind() static function that\nreturns the appropriate datatype without requiring a cast and assert in\nevery single place that processes a relation's relkind. Similarly\nyou've chosen to leave get_rel_relkind untouched, but that seems unwise.\n\nI think the chr_ macros are pointless.\n\nReading back the thread, it seems that the whole point of your patch was\nto change the tests that currently use 'if' tests to switch blocks. I\ncannot understand what's the motivation for that, but it appears to me\nthat the approach is backwards: I'd counsel to *first* change the APIs\n(get_rel_relkind and defining an enum, plus adding RelationGetRelkind)\nso that everything is more sensible and safe, including appropriate\nanswers for the places where an \"invalid\" relkind is returned; and once\nthat's in place, replace if tests with switch blocks where it makes\nsense to do so.\n\nAlso, I suggest that this thread is not a good one for this patch.\nSubject is entirely not appropriate. Open a new thread perhaps?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 14 Jul 2020 19:12:25 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Towards easier AMs: Cleaning up inappropriate use of name\n \"relkind\""
},
{
"msg_contents": "\n\n> On Jul 14, 2020, at 4:12 PM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n> This patch is way too large. Probably in part explained by the fact\n> that you seem to have run pgindent and absorbed a lot of unrelated\n> changes. This makes the patch essentially unreviewable.\n\nI did not run pgindent, but when changing\n\n\tif (relkind == RELKIND_INDEX)\n\t{\n\t\t/* foo */\n\t}\n\nto\n\n\tswitch (relkind)\n\t{\n\t\tcase RELKIND_INDEX:\n\t\t\t/* foo */\n\t}\n\nthe indentation of /* foo */ changes. For large foo, that results in a lot of lines. There are also cases in the code where comparisons of multiple variables are mixed together. To split those out into switch/case statements I had to rearrange some of the code blocks.\n\n> I think you should define a RelationGetRelkind() static function that\n> returns the appropriate datatype without requiring a cast and assert in\n> every single place that processes a relation's relkind. Similarly\n> you've chosen to leave get_rel_relkind untouched, but that seems unwise.\n\nI was well aware of how large the patch had gotten, and didn't want to add more....\n\n> I think the chr_ macros are pointless.\n\nLook more closely at the #define RelKindAsString(x) CppAsString2(chr_##x)\n\n> Reading back the thread, it seems that the whole point of your patch was\n> to change the tests that currently use 'if' tests to switch blocks. I\n> cannot understand what's the motivation for that,\n\nThere might not be sufficient motivation to make the patch worth doing. The motivation was to leverage the project's recent addition of -Wswitch to make it easier to know which code needs updating when you add a new relkind. That doesn't happen very often, but I was working towards that kind of thing, and thought this might be a good prerequisite patch for that work. 
Stylistically, I also prefer\n\n+ switch ((RelKind) rel->rd_rel->relkind)\n+ {\n+ case RELKIND_RELATION:\n+ case RELKIND_MATVIEW:\n+ case RELKIND_TOASTVALUE:\n\nover \n\n- if (rel->rd_rel->relkind == RELKIND_RELATION ||\n- rel->rd_rel->relkind == RELKIND_MATVIEW ||\n- rel->rd_rel->relkind == RELKIND_TOASTVALUE)\n\nwhich is a somewhat common pattern in the code. It takes less mental effort to see that only one variable is being compared against those three enum values. In some cases, though not necessarily this exact example, it also *might* save duplicated work computing the variable, depending on the situation and what the compiler can optimize away.\n\n> but it appears to me\n> that the approach is backwards: I'd counsel to *first* change the APIs\n> (get_rel_relkind and defining an enum, plus adding RelationGetRelkind)\n> so that everything is more sensible and safe, including appropriate\n> answers for the places where an \"invalid\" relkind is returned;\n\nOk.\n\n> and once\n> that's in place, replace if tests with switch blocks where it makes\n> sense to do so.\n\nOk.\n\n> \n> Also, I suggest that this thread is not a good one for this patch.\n> Subject is entirely not appropriate. Open a new thread perhaps?\n\nI've changed the subject line.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 14 Jul 2020 17:15:50 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Refactoring relkind as an enum"
}
]
[
{
"msg_contents": "Hello,\n\nI have been working on a node.js streaming client for different COPY\nscenarios.\nusually, during CopyOut, clients tend to buffer network chunks until they\nhave gathered a full copyData message and pass that to the user.\n\nIn some cases, this can lead to very large copyData messages. when there\nare very long text fields or bytea fields it will require a lot of memory\nto be handled (up to 1GB I think in the worst case scenario)\n\nIn COPY TO, I managed to relax that requirement, considering that copyData\nis simply a transparent container. For each network chunk, the relevent\nmessage content is forwarded which makes for 64KB chunks at most.\n\nIf that makes things clearer, here is an example scenarios, with 4 network\nchunks received and the way they are forwarded to the client.\n\nin: CopyData Int32Len Byten1\nin: Byten2\nin: Byten3\nin: CopyData Int32Len Byten4\n\nout: Byten1\nout: Byten2\nout: Byten3\nout: Byten4\n\nWe loose the semantics of the \"row\" that copyData has according to the\ndocumentation\nhttps://www.postgresql.org/docs/10/protocol-flow.html#PROTOCOL-COPY\n>The backend sends a CopyOutResponse message to the frontend, followed by\nzero or more >CopyData messages (**always one per row**), followed by\nCopyDone\n\nbut it is not a problem because the raw bytes are still parsable (rows +\nfields) in text mode (tsv) and in binary mode)\n\nNow I started working on copyBoth and logical decoding scenarios. In this\ncase, the server send series of copyData. 
1 copyData containing 1 message :\n\nat the network chunk level, in the case of large fields, we can observe\n\nin: CopyData Int32 XLogData Int64 Int64 Int64 Byten1\nin: Byten2\nin: CopyData Int32 XLogData Int64 Int64 Int64 Byten3\nin: CopyData Int32 XLogData Int64 Int64 Int64 Byten4\n\nout: XLogData Int64 Int64 Int64 Byten1\nout: Byten2\nout: XLogData Int64 Int64 Int64 Byten3\nout: XLogData Int64 Int64 Int64 Byten4\n\nbut at the XLogData level, the protocol is not self-describing its length,\nso there is no real way of knowing where the first XLogData ends apart from\n - knowing the length of the first copyData (4 + 1 + 3*8 + n1 + n2)\n - knowing the internals of the output plugin and benefit from a plugin\nthat self-describe its span\n\nwhen a network chunks contains several copyDatas\nin: CopyData Int32 XLogData Int64 Int64 Int64 Byten1 CopyData Int32\nXLogData Int64 Int64 Int64 Byten2\nwe have\nout: XLogData Int64 Int64 Int64 Byten1 XLogData Int64 Int64 Int64 Byten2\n\nand with test_decoding for example it is impossible to know where the\ntest_decoding output ends without remembering the original length of the\ncopyData.\n\nnow my question is the following :\nis it ok to consider that over the long term copyData is simply a transport\ncontainer that exists only to allow the multiplexing of events in the\nprotocol but that messages inside could be chunked over several copyData\nevents ?\n\nif we put test_decoding apart, do you consider that output plugins XLogData\nshould be self-aware of their length ? I suppose (but did not fully verify\nyet) that this is the case for pgoutput ? 
I suppose that wal2json could\nalso be parsed by balancing the brackets.\n\nI am wondering because when a client sends copyData to the server, the\ndocumentation says\n>The message boundaries are not required to have anything to do with row\nboundaries, >although that is often a reasonable choice.\n\nI hope that my message will ring a bell on the list.\nI tried the best I could to describe my very specific research.\nThank you for your help,\n---\nJérôme",
"msg_date": "Wed, 3 Jun 2020 19:28:12 +0200",
"msg_from": "Jerome Wagner <jerome.wagner@laposte.net>",
"msg_from_op": true,
"msg_subject": "question regarding copyData containers"
},
{
"msg_contents": "Jerome Wagner <jerome.wagner@laposte.net> writes:\n> now my question is the following :\n> is it ok to consider that over the long term copyData is simply a transport\n> container that exists only to allow the multiplexing of events in the\n> protocol but that messages inside could be chunked over several copyData\n> events ?\n\nYes, the expectation is that clients can send CopyData messages that are\nsplit up however they choose; the message boundaries needn't correspond\nto any semantic boundaries in the data stream.\n\nThe rule in the other direction, that a message corresponds to one table\nrow, is something that might not last forever either. As we get more\npeople working with large data values, there's going to be pressure to\nset some smaller limit on message size.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 03 Jun 2020 14:25:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: question regarding copyData containers"
},
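Tom's point above — that CopyData ('d') message boundaries carry no semantics on the way in, so clients may split them however they choose — is easy to see with a small sketch. The following is an illustrative model in Python (not taken from any real driver): it reassembles v3-protocol messages (one type byte plus a self-inclusive big-endian Int32 length) from arbitrarily split network chunks.

```python
import struct

def iter_messages(chunks):
    """Yield (msg_type, payload) tuples from arbitrarily split network chunks.

    Models the v3 wire format: a 1-byte type, a 4-byte big-endian length
    (which counts itself but not the type byte), then the payload.
    """
    buf = b""
    for chunk in chunks:
        buf += chunk
        while len(buf) >= 5:
            msg_type = buf[0:1]
            (length,) = struct.unpack("!I", buf[1:5])
            total = 1 + length          # type byte + length-inclusive body
            if len(buf) < total:
                break                   # wait for more network data
            yield msg_type, buf[5:total]
            buf = buf[total:]

# One logical CopyData ('d') message split across three chunks, followed by
# a second 'd' message arriving in the same chunk as the tail of the first.
row = b"1\thello\n"
msg = b"d" + struct.pack("!I", 4 + len(row)) + row
chunks = [msg[:3], msg[3:7], msg[7:] + msg]
out = list(iter_messages(chunks))
```

The same loop works whether a row arrives in one CopyData or a CopyData spans several TCP segments; only the framing bytes matter, which is exactly the freedom the protocol grants on the sending side.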
{
"msg_contents": "Hi,\n\nOn 2020-06-03 19:28:12 +0200, Jerome Wagner wrote:\n> I have been working on a node.js streaming client for different COPY\n> scenarios.\n> usually, during CopyOut, clients tend to buffer network chunks until they\n> have gathered a full copyData message and pass that to the user.\n> \n> In some cases, this can lead to very large copyData messages. when there\n> are very long text fields or bytea fields it will require a lot of memory\n> to be handled (up to 1GB I think in the worst case scenario)\n> \n> In COPY TO, I managed to relax that requirement, considering that copyData\n> is simply a transparent container. For each network chunk, the relevent\n> message content is forwarded which makes for 64KB chunks at most.\n\nUhm.\n\n\n> We loose the semantics of the \"row\" that copyData has according to the\n> documentation\n> https://www.postgresql.org/docs/10/protocol-flow.html#PROTOCOL-COPY\n> >The backend sends a CopyOutResponse message to the frontend, followed by\n> zero or more >CopyData messages (**always one per row**), followed by\n> CopyDone\n> \n> but it is not a problem because the raw bytes are still parsable (rows +\n> fields) in text mode (tsv) and in binary mode)\n\nThis seems like an extremely bad idea to me. Are we really going to ask\nclients to incur the overhead (both in complexity and runtime) to parse\nincoming data just to detect row boundaries? Given the number of\noptions there are for COPY, that's a seriously complicated task.\n\nI think that's a completely no-go.\n\n\nLeaving error handling aside (see para below), what does this actually\nget you? Either your client cares about getting a row in one sequential\nchunk, or it doesn't. If it doesn't care, then there's no need to\nallocate a buffer that can contain the whole 'd' message. You can just\nhand the clients the chunks incrementally. 
If it does, then you need to\nreassemble either way (or worse, you force to reimplement the client to\nreimplement that).\n\nI assume what you're trying to get at is being able to send CopyData\nmessages before an entire row is assembled? And you want to send\nseparate CopyData messages to allow for error handling? I think that's\na quite worthwhile goal, but I don't think it can sensibly solved by\njust removing protocol level framing of row boundaries. And that will\nmean evolving the protocol in a non-compatible way.\n\n\n> Now I started working on copyBoth and logical decoding scenarios. In this\n> case, the server send series of copyData. 1 copyData containing 1 message :\n> \n> at the network chunk level, in the case of large fields, we can observe\n> \n> in: CopyData Int32 XLogData Int64 Int64 Int64 Byten1\n> in: Byten2\n> in: CopyData Int32 XLogData Int64 Int64 Int64 Byten3\n> in: CopyData Int32 XLogData Int64 Int64 Int64 Byten4\n> \n> out: XLogData Int64 Int64 Int64 Byten1\n> out: Byten2\n> out: XLogData Int64 Int64 Int64 Byten3\n> out: XLogData Int64 Int64 Int64 Byten4\n\n> but at the XLogData level, the protocol is not self-describing its length,\n\n> so there is no real way of knowing where the first XLogData ends apart from\n> - knowing the length of the first copyData (4 + 1 + 3*8 + n1 + n2)\n> - knowing the internals of the output plugin and benefit from a plugin\n> that self-describe its span\n> when a network chunks contains several copyDatas\n> in: CopyData Int32 XLogData Int64 Int64 Int64 Byten1 CopyData Int32\n> XLogData Int64 Int64 Int64 Byten2\n> we have\n> out: XLogData Int64 Int64 Int64 Byten1 XLogData Int64 Int64 Int64 Byten2\n\nRight now all 'w' messages should be contained in one CopyData/'d' that\ndoesn't contain anything but the XLogData/'w'.\n\nDo you just mean that if we'd change the server side code to split 'w'\nmessages across multiple 'd' messages, then we couldn't make much sense\nof the data anymore? 
If so, then I don't really see a problem. Unless\nyou do a much larger change, what'd be the point in allowing to split\n'w' across multiple 'd' chunks? The input data exists in a linear\nbuffer already, so you're not going to reduce peak memory usage by\nsending smaller CopyData chunks.\n\nSure, we could evolve the logical decoding interface to output to be\nable to send data in a much more incremental way than, typically,\nper-row basis. But I think that'd quite substantially increase\ncomplexity. And the message framing seems to be the easier part of such\na change.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 3 Jun 2020 15:08:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: question regarding copyData containers"
},
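As a concrete illustration of the framing dependency discussed here, a sketch (Python, hypothetical helper name) of pulling an XLogData message out of a CopyData payload: the 'w' header is 25 bytes ('w' plus three big-endian int64s), and the WAL payload's length is recoverable only as the enclosing CopyData length minus 25 — the XLogData message itself never states it.

```python
import struct

def parse_xlogdata(copydata_payload):
    """Parse an XLogData ('w') message carried in a CopyData payload.

    Layout per the streaming-replication protocol: 'w', then three
    big-endian int64s (WAL start LSN, current WAL end LSN, send time),
    then the plugin/WAL bytes. Note the payload length is *not* encoded
    in the 'w' message itself -- it is the enclosing CopyData length
    minus the 25-byte header, which is the framing tie discussed in
    this thread.
    """
    assert copydata_payload[0:1] == b"w"
    wal_start, wal_end, send_time = struct.unpack("!qqq", copydata_payload[1:25])
    return wal_start, wal_end, send_time, copydata_payload[25:]

# Example values are made up; the payload here mimics test_decoding output.
payload = b"w" + struct.pack("!qqq", 0x16B3748, 0x16B3790, 0) + b"BEGIN 507"
parsed = parse_xlogdata(payload)
```

This is why splitting one 'w' across several 'd' messages would leave a receiver with no way to find the end of the XLogData without understanding the output plugin's own format.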
{
"msg_contents": "Hello,\n\nthank you for your feedback.\n\nI agree that modifying the COPY subprotocols is hard to do because it would\nhave an impact on the client ecosystem.\n\nMy understanding (which seems to be confirmed by what Tom Lane said) is\nthat the server discards the framing and\nmanages to make sense of the underlying data.\n\n> the expectation is that clients can send CopyData messages that are\n> split up however they choose; the message boundaries needn't correspond\n> to any semantic boundaries in the data stream.\n\nSo I thought that a client could decide to have the same behavior and could\nstart parsing the payload of a copyData message without assembling it first.\nIt works perfectly with COPY TO but I hit a roadblock on copyBoth during\nlogical replication with test_decoding because the subprotocol doesn't have\nany framing.\n\n> Right now all 'w' messages should be contained in one CopyData/'d' that\n> doesn't contain anything but the XLogData/'w'.\n\nThe current format of the XLogData/'w' message is\nw lsn lsn time byten\n\nand even if it is maybe too late now I was wondering why it was not decided\nto be\nw lsn lsn time n byten\n\nbecause it seems to me that the missing n ties the XLogData to the copyData\nframing.\n\n>The input data exists in a linear\n>buffer already, so you're not going to reduce peak memory usage by\n>sending smaller CopyData chunks.\n\nThat is very surprising to me. 
Do you mean that on the server in COPY TO\nmode, a full row is prepared in a linear buffer in memory before\nbeing sent as a copyData/d'\nI found the code around\nhttps://github.com/postgres/postgres/blob/master/src/backend/commands/copy.c#L2153\nand\nindeed the whole row seems to be buffered in memory.\n\nGood thing or bad thing, users tend to use bigger fields (text, jsonb,\nbytea) and that can be very memory hungry.\nDo you know a case in postgres (other than large_objects I suppose) where\nthe server can flush data from a field without buffering it in memory ?\n\nAnd then as you noted, there is the multiplexing of events. A very long\ncopyData makes the communication impossible between the client and the\nserver during the transfer.\n\nI briefly looked at\nhttps://github.com/postgres/postgres/blob/master/src/backend/replication/walsender.c\nand\nI found\n\n/*\n* Maximum data payload in a WAL data message. Must be >= XLOG_BLCKSZ.\n*\n* We don't have a good idea of what a good value would be; there's some\n* overhead per message in both walsender and walreceiver, but on the other\n* hand sending large batches makes walsender less responsive to signals\n* because signals are checked only between messages. 
128kB (with\n* default 8k blocks) seems like a reasonable guess for now.\n*/\n#define MAX_SEND_SIZE (XLOG_BLCKSZ * 16)\nso I thought that the maximum copyData/d' I would receive during logical\nreplication was MAX_SEND_SIZE but it seems that this is not used for\nlogical decoding.\nthe whole output of the output plugin seem to be prepared in memory so for\nan insert like\n\ninsert into mytable (col) values (repeat('-', pow(2, 27)::int)\n\na 128MB linear buffer will be created on the server and sent as 1 copyData\nover many network chunks.\n\nSo I understand that in the long term copyData framing should not carry any\nsemantic to be able to keep messages small enough to allow multiplexing but\nthat there are many steps to climb before that.\n\nWould it make sense one day in some way to try and do streaming at the\nsub-field level ? I guess that is a huge undertaking since most of the\nfield unit interfaces are probably based on a buffer/field one-to-one\nmapping.\n\nGreetings,\nJérôme\n\n\n\nOn Thu, Jun 4, 2020 at 12:08 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2020-06-03 19:28:12 +0200, Jerome Wagner wrote:\n> > I have been working on a node.js streaming client for different COPY\n> > scenarios.\n> > usually, during CopyOut, clients tend to buffer network chunks until they\n> > have gathered a full copyData message and pass that to the user.\n> >\n> > In some cases, this can lead to very large copyData messages. when there\n> > are very long text fields or bytea fields it will require a lot of memory\n> > to be handled (up to 1GB I think in the worst case scenario)\n> >\n> > In COPY TO, I managed to relax that requirement, considering that\n> copyData\n> > is simply a transparent container. 
For each network chunk, the relevent\n> > message content is forwarded which makes for 64KB chunks at most.\n>\n> Uhm.\n>\n>\n> > We loose the semantics of the \"row\" that copyData has according to the\n> > documentation\n> > https://www.postgresql.org/docs/10/protocol-flow.html#PROTOCOL-COPY\n> > >The backend sends a CopyOutResponse message to the frontend, followed by\n> > zero or more >CopyData messages (**always one per row**), followed by\n> > CopyDone\n> >\n> > but it is not a problem because the raw bytes are still parsable (rows +\n> > fields) in text mode (tsv) and in binary mode)\n>\n> This seems like an extremely bad idea to me. Are we really going to ask\n> clients to incur the overhead (both in complexity and runtime) to parse\n> incoming data just to detect row boundaries? Given the number of\n> options there are for COPY, that's a seriously complicated task.\n>\n> I think that's a completely no-go.\n>\n>\n> Leaving error handling aside (see para below), what does this actually\n> get you? Either your client cares about getting a row in one sequential\n> chunk, or it doesn't. If it doesn't care, then there's no need to\n> allocate a buffer that can contain the whole 'd' message. You can just\n> hand the clients the chunks incrementally. If it does, then you need to\n> reassemble either way (or worse, you force to reimplement the client to\n> reimplement that).\n>\n> I assume what you're trying to get at is being able to send CopyData\n> messages before an entire row is assembled? And you want to send\n> separate CopyData messages to allow for error handling? I think that's\n> a quite worthwhile goal, but I don't think it can sensibly solved by\n> just removing protocol level framing of row boundaries. And that will\n> mean evolving the protocol in a non-compatible way.\n>\n>\n> > Now I started working on copyBoth and logical decoding scenarios. In this\n> > case, the server send series of copyData. 
1 copyData containing 1\n> message :\n> >\n> > at the network chunk level, in the case of large fields, we can observe\n> >\n> > in: CopyData Int32 XLogData Int64 Int64 Int64 Byten1\n> > in: Byten2\n> > in: CopyData Int32 XLogData Int64 Int64 Int64 Byten3\n> > in: CopyData Int32 XLogData Int64 Int64 Int64 Byten4\n> >\n> > out: XLogData Int64 Int64 Int64 Byten1\n> > out: Byten2\n> > out: XLogData Int64 Int64 Int64 Byten3\n> > out: XLogData Int64 Int64 Int64 Byten4\n>\n> > but at the XLogData level, the protocol is not self-describing its\n> length,\n>\n> > so there is no real way of knowing where the first XLogData ends apart\n> from\n> > - knowing the length of the first copyData (4 + 1 + 3*8 + n1 + n2)\n> > - knowing the internals of the output plugin and benefit from a plugin\n> > that self-describe its span\n> > when a network chunks contains several copyDatas\n> > in: CopyData Int32 XLogData Int64 Int64 Int64 Byten1 CopyData Int32\n> > XLogData Int64 Int64 Int64 Byten2\n> > we have\n> > out: XLogData Int64 Int64 Int64 Byten1 XLogData Int64 Int64 Int64\n> Byten2\n>\n> Right now all 'w' messages should be contained in one CopyData/'d' that\n> doesn't contain anything but the XLogData/'w'.\n>\n> Do you just mean that if we'd change the server side code to split 'w'\n> messages across multiple 'd' messages, then we couldn't make much sense\n> of the data anymore? If so, then I don't really see a problem. Unless\n> you do a much larger change, what'd be the point in allowing to split\n> 'w' across multiple 'd' chunks? The input data exists in a linear\n> buffer already, so you're not going to reduce peak memory usage by\n> sending smaller CopyData chunks.\n>\n> Sure, we could evolve the logical decoding interface to output to be\n> able to send data in a much more incremental way than, typically,\n> per-row basis. But I think that'd quite substantially increase\n> complexity. 
And the message framing seems to be the easier part of such\n> a change.\n>\n> Greetings,\n>\n> Andres Freund\n>",
"msg_date": "Thu, 4 Jun 2020 11:37:54 +0200",
"msg_from": "Jerome Wagner <jerome.wagner@laposte.net>",
"msg_from_op": true,
"msg_subject": "Re: question regarding copyData containers"
}
]
[
{
"msg_contents": "In connection with the nearby thread about spinlock coding rule\nviolations, I noticed that we have several places that are doing\nthings like this:\n\n\tSpinLockAcquire(&WalRcv->mutex);\n\t...\n\twritten_lsn = pg_atomic_read_u64(&WalRcv->writtenUpto);\n\t...\n\tSpinLockRelease(&WalRcv->mutex);\n\nThat's from pg_stat_get_wal_receiver(); there is similar code in\nfreelist.c's ClockSweepTick() and StrategySyncStart().\n\nThis seems to me to be very bad code. In the first place, on machines\nwithout the appropriate type of atomic operation, atomics.c is going\nto be using a spinlock to emulate atomicity, which means this code\ntries to take a spinlock while holding another one. That's not okay,\neither from the standpoint of performance or error-safety. In the\nsecond place, this coding seems to me to indicate serious confusion\nabout which lock is protecting what. In the above example, if\nwrittenUpto is only accessed through atomic operations then it seems\nlike we could just move the pg_atomic_read_u64 out of the spinlock\nsection; or if the spinlock is adequate protection then we could just\ndo a normal fetch. If we actually need both locks then this needs\nsignificant re-thinking, IMO.\n\nComments?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 03 Jun 2020 14:19:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Atomic operations within spinlocks"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-03 14:19:45 -0400, Tom Lane wrote:\n> In connection with the nearby thread about spinlock coding rule\n> violations, I noticed that we have several places that are doing\n> things like this:\n>\n> \tSpinLockAcquire(&WalRcv->mutex);\n> \t...\n> \twritten_lsn = pg_atomic_read_u64(&WalRcv->writtenUpto);\n> \t...\n> \tSpinLockRelease(&WalRcv->mutex);\n>\n> That's from pg_stat_get_wal_receiver(); there is similar code in\n> freelist.c's ClockSweepTick() and StrategySyncStart().\n>\n> This seems to me to be very bad code. In the first place, on machines\n> without the appropriate type of atomic operation, atomics.c is going\n> to be using a spinlock to emulate atomicity, which means this code\n> tries to take a spinlock while holding another one. That's not okay,\n> either from the standpoint of performance or error-safety.\n\nI'm honestly not particularly concerned about either of those in\ngeneral:\n\n- WRT performance: Which platforms where we care about performance can't\n do a tear-free read of a 64bit integer, and thus needs a spinlock for\n this? And while the cases in freelist.c aren't just reads, they are\n really cold paths (clock wrapping around).\n- WRT error safety: What could happen here? The atomics codepaths is\n no-fail code? And nothing should ever nest inside the atomic-emulation\n spinlocks. What am I missing?\n\nI think straight out prohibiting use of atomics inside a spinlock will\nlead to more complicated and slower code, rather than than improving\nanything. I'm to blame for the ClockSweepTick() case, and I don't really\nsee how we could avoid the atomic-while-spinlocked without relying on\n64bit atomics, which are emulated on more platforms, and without having\na more complicated retry loop.\n\n\n> In the second place, this coding seems to me to indicate serious\n> confusion about which lock is protecting what. 
In the above example,\n> if writtenUpto is only accessed through atomic operations then it\n> seems like we could just move the pg_atomic_read_u64 out of the\n> spinlock section; or if the spinlock is adequate protection then we\n> could just do a normal fetch. If we actually need both locks then\n> this needs significant re-thinking, IMO.\n\nYea, it doesn't seem necessary at all that writtenUpto is read with the\nspinlock held. That's very recent:\n\ncommit 2c8dd05d6cbc86b7ad21cfd7010e041bb4c3950b\nAuthor: Michael Paquier <michael@paquier.xyz>\nDate: 2020-05-17 09:22:07 +0900\n\n Make pg_stat_wal_receiver consistent with the WAL receiver's shmem info\n\nand I assume just was caused by mechanical replacement, rather than\nintentionally doing so while holding the spinlock. As far as I can tell\nnone of the other writtenUpto accesses are under the spinlock.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 3 Jun 2020 13:45:42 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Atomic operations within spinlocks"
},
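The hazard Tom and Andres are weighing — an "atomic" read that secretly takes a lock on platforms using atomics.c's spinlock emulation, so an atomic op inside a spinlock section nests two locks — can be modeled in a few lines. This is a hypothetical Python sketch, with `threading.Lock` standing in for a spinlock; it is not PostgreSQL code.

```python
import threading

class EmulatedAtomicU64:
    """Models atomics.c's fallback: on platforms without native 64-bit
    atomics, each pg_atomic_uint64 is protected by its own spinlock,
    so even a plain read acquires a lock."""
    def __init__(self, value=0):
        self._lock = threading.Lock()   # stands in for the emulation spinlock
        self._value = value

    def read(self):
        with self._lock:
            return self._value

mutex = threading.Lock()           # stands in for WalRcv->mutex
written_upto = EmulatedAtomicU64(12345)

# The pattern Tom flags: an atomic read *inside* the spinlock section
# nests two locks when the atomic is emulated ...
with mutex:
    inside = written_upto.read()

# ... whereas the fix discussed in this thread is to read it only after
# the spinlock is released, so at most one lock is held at a time:
with mutex:
    pass                           # copy the other spinlock-protected fields
after = written_upto.read()
```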
{
"msg_contents": "On Thu, Jun 4, 2020 at 8:45 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2020-06-03 14:19:45 -0400, Tom Lane wrote:\n> > In the second place, this coding seems to me to indicate serious\n> > confusion about which lock is protecting what. In the above example,\n> > if writtenUpto is only accessed through atomic operations then it\n> > seems like we could just move the pg_atomic_read_u64 out of the\n> > spinlock section; or if the spinlock is adequate protection then we\n> > could just do a normal fetch. If we actually need both locks then\n> > this needs significant re-thinking, IMO.\n>\n> Yea, it doesn't seem necessary at all that writtenUpto is read with the\n> spinlock held. That's very recent:\n>\n> commit 2c8dd05d6cbc86b7ad21cfd7010e041bb4c3950b\n> Author: Michael Paquier <michael@paquier.xyz>\n> Date: 2020-05-17 09:22:07 +0900\n>\n> Make pg_stat_wal_receiver consistent with the WAL receiver's shmem info\n>\n> and I assume just was caused by mechanical replacement, rather than\n> intentionally doing so while holding the spinlock. As far as I can tell\n> none of the other writtenUpto accesses are under the spinlock.\n\nYeah. It'd be fine to move that after the spinlock release. Although\nit's really just for informational purposes only, not for any data\nintegrity purpose, reading it before the spinlock acquisition would\ntheoretically allow it to appear to be (reportedly) behind\nflushedUpto, which would be silly.\n\n\n",
"msg_date": "Thu, 4 Jun 2020 09:40:31 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Atomic operations within spinlocks"
},
{
"msg_contents": "On Thu, Jun 04, 2020 at 09:40:31AM +1200, Thomas Munro wrote:\n> Yeah. It'd be fine to move that after the spinlock release. Although\n> it's really just for informational purposes only, not for any data\n> integrity purpose, reading it before the spinlock acquisition would\n> theoretically allow it to appear to be (reportedly) behind\n> flushedUpto, which would be silly.\n\nIndeed. This could just be done after the spinlock section. Sorry\nabout that.\n--\nMichael",
"msg_date": "Thu, 4 Jun 2020 16:03:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Atomic operations within spinlocks"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-06-03 14:19:45 -0400, Tom Lane wrote:\n>> This seems to me to be very bad code.\n\n> I think straight out prohibiting use of atomics inside a spinlock will\n> lead to more complicated and slower code, rather than than improving\n> anything. I'm to blame for the ClockSweepTick() case, and I don't really\n> see how we could avoid the atomic-while-spinlocked without relying on\n> 64bit atomics, which are emulated on more platforms, and without having\n> a more complicated retry loop.\n\nWell, if you don't want to touch freelist.c then there is no point\nworrying about what pg_stat_get_wal_receiver is doing. But having\nnow studied that freelist.c code, I don't like it one bit. It's\noverly complicated and not very accurately commented, making it\n*really* hard to convince oneself that it's not buggy.\n\nI think a good case could be made for ripping out what's there now\nand just redefining nextVictimBuffer as a pg_atomic_uint64 that we\nnever reset (ie, make its comment actually true). Then ClockSweepTick\nreduces to\n\npg_atomic_fetch_add_u64(&StrategyControl->nextVictimBuffer, 1) % NBuffers\n\nand when we want to know how many cycles have been completed, we just\ndivide the counter by NBuffers. Now admittedly, this is relying on both\npg_atomic_fetch_add_u64 being fast and int64 division being fast ... but\nto throw your own argument back at you, do we really care anymore about\nperformance on machines where those things aren't true? Furthermore,\nsince all this is happening in a code path that's going to lead to I/O,\nI'm not exactly convinced that a few nanoseconds matter anyway.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Jun 2020 13:57:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Atomic operations within spinlocks"
},
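Tom's proposed redesign above can be sketched compactly. The model below is Python standing in for the C (a plain integer models the `pg_atomic_uint64`, and the attribute/function names are illustrative, not actual PostgreSQL identifiers): the counter is never reset, the victim buffer is the fetch-and-add result modulo NBuffers, and the number of completed passes falls out of an integer division.

```python
NBUFFERS = 16  # stand-in for the real NBuffers

class ClockSweep:
    """Model of the proposed design: a never-reset 64-bit tick counter.

    In C this would be a pg_atomic_uint64, and ClockSweepTick() would be
    pg_atomic_fetch_add_u64(&StrategyControl->nextVictimBuffer, 1) % NBuffers.
    """
    def __init__(self):
        self.next_victim_buffer = 0    # never reset

    def tick(self):
        old = self.next_victim_buffer  # models pg_atomic_fetch_add_u64(.., 1)
        self.next_victim_buffer += 1
        return old % NBUFFERS

    def complete_passes(self):
        # Derived on demand instead of being a separately maintained counter.
        return self.next_victim_buffer // NBUFFERS

sweep = ClockSweep()
victims = [sweep.tick() for _ in range(NBUFFERS * 2 + 3)]
```

After two full sweeps plus three ticks, the victim sequence has cycled 0..15 twice and `complete_passes()` reports 2, with no spinlock and no wraparound bookkeeping.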
{
"msg_contents": "I wrote:\n> I think a good case could be made for ripping out what's there now\n> and just redefining nextVictimBuffer as a pg_atomic_uint64 that we\n> never reset (ie, make its comment actually true). Then ClockSweepTick\n> reduces to\n> pg_atomic_fetch_add_u64(&StrategyControl->nextVictimBuffer, 1) % NBuffers\n> and when we want to know how many cycles have been completed, we just\n> divide the counter by NBuffers.\n\nActually ... we could probably use this design with a uint32 counter\nas well, on machines where the 64-bit operations would be slow.\nIn that case, integer overflow of nextVictimBuffer would happen from\ntime to time, resulting in\n\n1. The next actual victim buffer index would jump strangely. This\ndoesn't seem like it'd matter at all, as long as it was infrequent.\n\n2. The computed completePasses value would go backwards. I bet\nthat wouldn't matter too much either, or at least we could teach\nBgBufferSync to cope. (I notice the comments therein suggest that\nit is already designed to cope with completePasses wrapping around,\nso maybe nothing needs to be done.)\n\nIf NBuffers was large enough to be a significant fraction of UINT_MAX,\nthen these corner cases would happen often enough to perhaps be\nproblematic. But I seriously doubt that'd be possible on hardware\nthat wasn't capable of using the 64-bit code path.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Jun 2020 14:50:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Atomic operations within spinlocks"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-04 13:57:19 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2020-06-03 14:19:45 -0400, Tom Lane wrote:\n> >> This seems to me to be very bad code.\n>\n> > I think straight out prohibiting use of atomics inside a spinlock will\n> > lead to more complicated and slower code, rather than than improving\n> > anything. I'm to blame for the ClockSweepTick() case, and I don't really\n> > see how we could avoid the atomic-while-spinlocked without relying on\n> > 64bit atomics, which are emulated on more platforms, and without having\n> > a more complicated retry loop.\n>\n> Well, if you don't want to touch freelist.c then there is no point\n> worrying about what pg_stat_get_wal_receiver is doing. But having\n> now studied that freelist.c code, I don't like it one bit. It's\n> overly complicated and not very accurately commented, making it\n> *really* hard to convince oneself that it's not buggy.\n>\n> I think a good case could be made for ripping out what's there now\n> and just redefining nextVictimBuffer as a pg_atomic_uint64 that we\n> never reset (ie, make its comment actually true). Then ClockSweepTick\n> reduces to\n\nNote that we can't do that in the older back branches, there wasn't any\n64bit atomics fallback before\n\ncommit e8fdbd58fe564a29977f4331cd26f9697d76fc40\nAuthor: Andres Freund <andres@anarazel.de>\nDate: 2017-04-07 14:44:47 -0700\n\n Improve 64bit atomics support.\n\n\n> pg_atomic_fetch_add_u64(&StrategyControl->nextVictimBuffer, 1) % NBuffers\n>\n> and when we want to know how many cycles have been completed, we just\n> divide the counter by NBuffers. Now admittedly, this is relying on both\n> pg_atomic_fetch_add_u64 being fast and int64 division being fast ... but\n> to throw your own argument back at you, do we really care anymore about\n> performance on machines where those things aren't true? 
Furthermore,\n> since all this is happening in a code path that's going to lead to I/O,\n> I'm not exactly convinced that a few nanoseconds matter anyway.\n\nIt's very easy to observe this code being a bottleneck. If we only\nperformed a single clock tick before IO, sure, then the cost would\nobviously be swamped by the IO cost. But it's pretty common to end up\nhaving to do that ~ NBuffers * 5 times for a single buffer.\n\nI don't think it's realistic to rely on 64bit integer division being\nfast in this path. The latency is pretty darn significant (64bit div is\n35-88 cycles on skylake-x, 64bit idiv 42-95). And unless I\nmisunderstand, you'd have to do so (for % NBuffers) every single clock\ntick, not just when we wrap around.\n\nWe could however avoid the spinlock if we were to use 64bit atomics, by\nstoring the number of passes in the upper 32bit, and the next victim\nbuffer in the lower. But that doesn't seem that simple either, and will\nregress performance on 32bit platforms.\n\n\nI don't like the whole strategy logic at all, so I guess there's some\nargument to improve things from that end. It's probably possible to\navoid the lock with 32bit atomics as well.\n\n\nI'd still like to know which problem we're actually trying to solve\nhere. I don't understand the \"error\" issues you mentioned upthread.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 4 Jun 2020 11:55:25 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Atomic operations within spinlocks"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-04 14:50:40 -0400, Tom Lane wrote:\n> Actually ... we could probably use this design with a uint32 counter\n> as well, on machines where the 64-bit operations would be slow.\n\nOn skylake-x even a 32bit [i]div is still 26 cycles. That's more than an\natomic operation's 18 cycles.\n\n\n> 2. The computed completePasses value would go backwards. I bet\n> that wouldn't matter too much either, or at least we could teach\n> BgBufferSync to cope. (I notice the comments therein suggest that\n> it is already designed to cope with completePasses wrapping around,\n> so maybe nothing needs to be done.)\n\nIf we're not concerned about that, then we can remove the\natomic-inside-spinlock, I think. The only reason for that right now is\nto avoid assuming a wrong pass number.\n\nI don't think completePasses wrapping around is comparable in frequency\nto wrapping around nextVictimBuffer. It's not really worth worrying\nabout bgwriter wrongly assuming it lapped the clock sweep once every\nUINT32_MAX * NBuffers ticks, but there being a risk every NBuffers ticks seems\nworth worrying about.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 4 Jun 2020 12:06:45 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Atomic operations within spinlocks"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I'd still like to know which problem we're actually trying to solve\n> here. I don't understand the \"error\" issues you mentioned upthread.\n\nIf you error out of getting the inner spinlock, the outer spinlock\nis stuck, permanently, because there is no mechanism for spinlock\nrelease during transaction abort. Admittedly it's not very likely\nfor the inner acquisition to fail, but it's possible. Aside from\ntimeout scenarios (e.g., process holding lock gets swapped out to\nTimbuktu), it could be that both spinlocks are mapped onto a single\nimplementation lock by spin.c, which notes\n\n * We map all spinlocks onto a set of NUM_SPINLOCK_SEMAPHORES semaphores.\n * It's okay to map multiple spinlocks onto one semaphore because no process\n * should ever hold more than one at a time.\n\nYou've falsified that argument ... and no, I don't want to upgrade\nthe spinlock infrastructure enough to make this OK. We shouldn't\never be holding spinlocks long enough, or doing anything complicated\nenough inside them, for such an upgrade to have merit.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Jun 2020 15:07:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Atomic operations within spinlocks"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-06-04 14:50:40 -0400, Tom Lane wrote:\n>> 2. The computed completePasses value would go backwards. I bet\n>> that wouldn't matter too much either, or at least we could teach\n>> BgBufferSync to cope. (I notice the comments therein suggest that\n>> it is already designed to cope with completePasses wrapping around,\n>> so maybe nothing needs to be done.)\n\n> If we're not concerned about that, then we can remove the\n> atomic-inside-spinlock, I think. The only reason for that right now is\n> to avoid assuming a wrong pass number.\n\nHmm. That might be a less-invasive path to a solution. I can take\na look, if you don't want to.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Jun 2020 15:13:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Atomic operations within spinlocks"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-04 15:07:34 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I'd still like to know which problem we're actually trying to solve\n> > here. I don't understand the \"error\" issues you mentioned upthread.\n>\n> If you error out of getting the inner spinlock, the outer spinlock\n> is stuck, permanently, because there is no mechanism for spinlock\n> release during transaction abort. Admittedly it's not very likely\n> for the inner acquisition to fail, but it's possible.\n\nWe PANIC on stuck spinlocks, so I don't think that's a problem.\n\n\n> * We map all spinlocks onto a set of NUM_SPINLOCK_SEMAPHORES semaphores.\n> * It's okay to map multiple spinlocks onto one semaphore because no process\n> * should ever hold more than one at a time.\n>\n> You've falsified that argument ... and no, I don't want to upgrade\n> the spinlock infrastructure enough to make this OK.\n\nWell, theoretically we take care to avoid this problem. That's why we\nhave a separate define for spinlocks and atomics:\n\n/*\n * When we don't have native spinlocks, we use semaphores to simulate them.\n * Decreasing this value reduces consumption of OS resources; increasing it\n * may improve performance, but supplying a real spinlock implementation is\n * probably far better.\n */\n#define NUM_SPINLOCK_SEMAPHORES\t\t128\n\n/*\n * When we have neither spinlocks nor atomic operations support we're\n * implementing atomic operations on top of spinlock on top of semaphores. To\n * be safe against atomic operations while holding a spinlock separate\n * semaphores have to be used.\n */\n#define NUM_ATOMICS_SEMAPHORES\t\t64\n\nand\n\n#ifndef HAVE_SPINLOCKS\n\n\t/*\n\t * NB: If we're using semaphore based TAS emulation, be careful to use a\n\t * separate set of semaphores. 
Otherwise we'd get in trouble if an atomic\n\t * var would be manipulated while spinlock is held.\n\t */\n\ts_init_lock_sema((slock_t *) &ptr->sema, true);\n#else\n\tSpinLockInit((slock_t *) &ptr->sema);\n#endif\n\nBut it looks like that code is currently buggy (and looks like it always\nhas been), because we don't look at the nested argument when\ninitializing the semaphore. So we currently allocate too many\nsemaphores, without benefiting from them :(.\n\n\n> We shouldn't ever be holding spinlocks long enough, or doing anything\n> complicated enough inside them, for such an upgrade to have merit.\n\nWell, I don't think atomic instructions are that complicated. And I\nthink prohibiting atomics-within-spinlock adds a problematic\nrestriction, without much in the way of benefits:\n\nThere's plenty things where it's somewhat easy to make the fast-path\nlock-free, but the slow path still needs a lock (e.g. around a\nfreelist). And for those it's really useful to still be able to have a\ncoherent update to an atomic variable, to synchronize with the fast-path\nthat doesn't take the spinlock.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 4 Jun 2020 19:33:02 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Atomic operations within spinlocks"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-04 15:13:29 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2020-06-04 14:50:40 -0400, Tom Lane wrote:\n> >> 2. The computed completePasses value would go backwards. I bet\n> >> that wouldn't matter too much either, or at least we could teach\n> >> BgBufferSync to cope. (I notice the comments therein suggest that\n> >> it is already designed to cope with completePasses wrapping around,\n> >> so maybe nothing needs to be done.)\n>\n> > If we're not concerned about that, then we can remove the\n> > atomic-inside-spinlock, I think. The only reason for that right now is\n> > to avoid assuming a wrong pass number.\n>\n> Hmm. That might be a less-invasive path to a solution. I can take\n> a look, if you don't want to.\n\nFirst, I think it would be problematic:\n\n\t/*\n\t * Find out where the freelist clock sweep currently is, and how many\n\t * buffer allocations have happened since our last call.\n\t */\n\tstrategy_buf_id = StrategySyncStart(&strategy_passes, &recent_alloc);\n...\n\n\t/*\n\t * Compute strategy_delta = how many buffers have been scanned by the\n\t * clock sweep since last time. If first time through, assume none. Then\n\t * see if we are still ahead of the clock sweep, and if so, how many\n\t * buffers we could scan before we'd catch up with it and \"lap\" it. Note:\n\t * weird-looking coding of xxx_passes comparisons are to avoid bogus\n\t * behavior when the passes counts wrap around.\n\t */\n\tif (saved_info_valid)\n\t{\n\t\tint32\t\tpasses_delta = strategy_passes - prev_strategy_passes;\n\n\t\tstrategy_delta = strategy_buf_id - prev_strategy_buf_id;\n\t\tstrategy_delta += (long) passes_delta * NBuffers;\n\n\t\tAssert(strategy_delta >= 0);\n\nISTM that if we can get an out-of-sync strategy_passes and\nstrategy_buf_id we'll end up with a pretty wrong strategy_delta. 
Which,\nI think, can cause it to reset bgwriter's position:\n\t\telse\n\t\t{\n\t\t\t/*\n\t\t\t * We're behind, so skip forward to the strategy point and start\n\t\t\t * cleaning from there.\n\t\t\t */\n#ifdef BGW_DEBUG\n\t\t\telog(DEBUG2, \"bgwriter behind: bgw %u-%u strategy %u-%u delta=%ld\",\n\t\t\t\t next_passes, next_to_clean,\n\t\t\t\t strategy_passes, strategy_buf_id,\n\t\t\t\t strategy_delta);\n#endif\n\t\t\tnext_to_clean = strategy_buf_id;\n\t\t\tnext_passes = strategy_passes;\n\t\t\tbufs_to_lap = NBuffers;\n\t\t}\n\n\nWhile I think that the whole logic in BgBufferSync doesn't make a whole\nlot of sense, it does seem to me this has a fair potential to make it\nworse. In a scenario with a decent cache hit ratio (leading to high\nusagecounts) and a not that large NBuffers, we can end up doing quite\na few passes (as in many per second), so it might not be that hard to hit\nthis.\n\n\nI am not immediately coming up with a cheap solution that doesn't do the\natomics-within-spinlock thing. The best I can come up with is using a\n64bit atomic, with the upper 32bit containing the number of passes, and\nthe lower 32bit containing the current buffer. Where the lower 32bit /\nthe buffer is handled like it currently is, i.e. we \"try\" to keep it\nbelow NBuffers. So % is only used for the \"cold\" path. That'd just add a\n64->32 bit cast in the hot path, which shouldn't be measurable. But it'd\nregress platforms without 64bit atomics substantially.\n\nWe could obviously also just rewrite the BgBufferSync() logic, so it\ndoesn't care about things like \"lapping\", but that's not an easy change.\n\n\nSo the best I can really suggest, unless we were to agree on atomics\nbeing ok inside spinlocks, is probably to just replace the spinlock with\nan lwlock. That'd perhaps cause a small slowdown for a few cases, but\nit'd make workloads that e.g. use the freelist a lot (e.g. when tables\nare dropped regularly) scale better.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 4 Jun 2020 20:03:10 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Atomic operations within spinlocks"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-04 19:33:02 -0700, Andres Freund wrote:\n> But it looks like that code is currently buggy (and looks like it always\n> has been), because we don't look at the nested argument when\n> initializing the semaphore. So we currently allocate too many\n> semaphores, without benefiting from them :(.\n\nI wrote a patch for this, and when I got around to testing it, I\nfound that our tests currently don't pass when using both\n--disable-spinlocks and --disable-atomics. Turns out to not be related\nto the issue above, but the global barrier support added in 13.\n\nThat *reads* two 64 bit atomics in a signal handler. Which is normally\nfine, but not at all cool when atomics (or just 64 bit atomics) are\nbacked by spinlocks. Because we can \"self interrupt\" while already\nholding the spinlock.\n\nIt looks to me that that's a danger whenever 64bit atomics are backed by\nspinlocks, not just when both --disable-spinlocks and --disable-atomics\nare used. But I suspect that it's really hard to hit the tiny window of\ndanger when those options aren't used. While we have buildfarm animals\ntesting each of those separately, we don't have one that tests both\ntogether...\n\nI'm not really sure what to do about that issue. The easiest thing\nwould probably be to change the barrier generation to 32bit (which\ndoesn't have to use locks for reads in any situation). I tested doing\nthat, and it fixes the hangs for me.\n\n\nRandomly noticed while looking at the code:\n\tuint64\t\tflagbit = UINT64CONST(1) << (uint64) type;\n\nthat shouldn't be 64bit, right?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 5 Jun 2020 17:19:26 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Atomic operations within spinlocks"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I wrote a patch for this, and when I got around to to testing it, I\n> found that our tests currently don't pass when using both\n> --disable-spinlocks and --disable-atomics. Turns out to not be related\n> to the issue above, but the global barrier support added in 13.\n> That *reads* two 64 bit atomics in a signal handler. Which is normally\n> fine, but not at all cool when atomics (or just 64 bit atomics) are\n> backed by spinlocks. Because we can \"self interrupt\" while already\n> holding the spinlock.\n\nThis is the sort of weird platform-specific problem that I'd prefer to\navoid by minimizing our expectations of what spinlocks can be used for.\n\n> I'm not really sure what to do about that issue. The easisest thing\n> would probably be to change the barrier generation to 32bit (which\n> doesn't have to use locks for reads in any situation).\n\nYeah, I think we need a hard rule that you can't use a spinlock in\nan interrupt handler --- which means no atomics that don't have\nnon-spinlock implementations on every platform.\n\nAt some point I think we'll have to give up --disable-spinlocks;\nit's really of pretty marginal use (how often does anyone port PG\nto a new CPU type?) and the number of weird interactions it adds\nin this area seems like more than it's worth. But of course\nrequiring 64-bit atomics is still a step too far.\n\n> Randomly noticed while looking at the code:\n> \tuint64\t\tflagbit = UINT64CONST(1) << (uint64) type;\n\nI'm surprised we didn't get any compiler warnings about that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Jun 2020 21:01:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Atomic operations within spinlocks"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-05 21:01:56 -0400, Tom Lane wrote:\n> > I'm not really sure what to do about that issue. The easisest thing\n> > would probably be to change the barrier generation to 32bit (which\n> > doesn't have to use locks for reads in any situation).\n>\n> Yeah, I think we need a hard rule that you can't use a spinlock in\n> an interrupt handler --- which means no atomics that don't have\n> non-spinlock implementations on every platform.\n\nYea, that might be the easiest thing to do. The only other thing I can\nthink of would be to mask all signals for the duration of the\natomic-using-spinlock operation. That'd make the fallback noticably more\nexpensive, but otoh, do we care enough?\n\nI think a SIGNAL_HANDLER_BEGIN(); SIGNAL_HANDLER_END(); to back an\nAssert(!InSignalHandler()); could be quite useful. Could also save\nerrno etc.\n\n\n> At some point I think we'll have to give up --disable-spinlocks; it's\n> really of pretty marginal use (how often does anyone port PG to a new\n> CPU type?) and the number of weird interactions it adds in this area\n> seems like more than it's worth.\n\nIndeed. And any new architecture one would port PG to would have good\nenough compiler intrinsics to make that trivial. I still think it'd make\nsense to have a fallback implementation using compiler intrinsics...\n\nAnd I think we should just require 32bit atomics at the same time. Would\nprobably kill gaur though.\n\n\nI did just find a longstanding bug in the spinlock emulation code:\n\nvoid\ns_init_lock_sema(volatile slock_t *lock, bool nested)\n{\n\tstatic int\tcounter = 0;\n\n\t*lock = ((++counter) % NUM_SPINLOCK_SEMAPHORES) + 1;\n}\n\nvoid\ns_unlock_sema(volatile slock_t *lock)\n{\n\tint\t\t\tlockndx = *lock;\n\n\tif (lockndx <= 0 || lockndx > NUM_SPINLOCK_SEMAPHORES)\n\t\telog(ERROR, \"invalid spinlock number: %d\", lockndx);\n\tPGSemaphoreUnlock(SpinlockSemaArray[lockndx - 1]);\n}\n\n\nI don't think it's ok that counter is a signed integer... 
While it maybe\nused to be unlikely that we ever have that many spinlocks, I don't think\nit's that hard anymore, because we dynamically allocate them for a lot\nof parallel query stuff. A small regression test that initializes\nenough spinlocks indeed errors out with\n2020-06-05 18:08:29.110 PDT [734946][3/2:0] ERROR: invalid spinlock number: -126\n2020-06-05 18:08:29.110 PDT [734946][3/2:0] STATEMENT: SELECT test_atomic_ops();\n\n\n\n> > Randomly noticed while looking at the code:\n> > \tuint64\t\tflagbit = UINT64CONST(1) << (uint64) type;\n>\n> I'm surprised we didn't get any compiler warnings about that.\n\nUnfortunately I don't think one can currently compile postgres with\nwarnings for \"implicit casts\" enabled :(.\n\n\n> But of course requiring 64-bit atomics is still a step too far.\n\nIf we had a 32bit compare-exchange it ought to be possible to write a\nsignal-safe emulation of 64bit atomics. I think. Something *roughly*\nlike:\n\n\ntypedef struct pg_atomic_uint64\n{\n /*\n * Meaning of state bits:\n * 0-1: current valid\n * 2-4: current proposed\n * 5: in signal handler\n * 6-31: pid of proposer\n */\n pg_atomic_uint32 state;\n\n /*\n * One current value, two different proposed values.\n */\n uint64 value[3];\n} pg_atomic_uint64;\n\nThe update protocol would be something roughly like:\n\nbool\npg_atomic_compare_exchange_u64_impl(volatile pg_atomic_uint64 *ptr, uint64 *expected, uint64 newval)\n{\n while (true)\n {\n\tuint32 old_state = pg_atomic_read_u32(&ptr->state);\n uint32 updater_pid = PID_FROM_STATE(old_state);\n uint32 new_state;\n uint64 old_value;\n\n int proposing;\n\n /*\n * Value changed, so fail. 
This is obviously racy, but we'll\n * notice concurrent updates later.\n */\n if (ptr->value[VALID_FIELD(old_state)] != *expected)\n {\n return false;\n }\n\n if (updater_pid == INVALID_PID)\n {\n\n new_state = old_state;\n\n /* signal that current process is updating */\n new_state |= MyProcPid >> PID_STATE_SHIFT;\n if (InSignalHandler)\n new_state |= PROPOSER_IN_SIGNAL_HANDLER_BIT;\n\n /* set which index is being proposed */\n new_state = (new_state & ~PROPOSER_BITS) |\n NEXT_PROPOSED_FIELD(old_state, &proposing);\n\n /*\n * If we successfully can update state to contain our new\n * value, we have a right to do so, and can only be\n * interrupted by ourselves, in a signal handler.\n */\n if (!pg_atomic_compare_exchange(&ptr->state, &old_state, new_state))\n {\n /* somebody else updated, restart */\n continue;\n }\n\n old_state = new_state;\n\n /*\n * It's ok to compare the values now. If we are interrupted\n * by a signal handler, we'll notice when updating\n * state. There's no danger updating the same proposed value\n * in two processes, because they they always would get\n * offsets to propse into.\n */\n ptr->value[proposing] = newval;\n\n /* set the valid field to the one we just filled in */\n new_state = (new_state & ~VALID_FIELD_BITS) | proposed;\n /* remove ourselve as updater */\n new_state &= UPDATER_BITS;\n\n if (!pg_atomic_compare_exchange(&ptr->state, &old_state, new_state))\n {\n /*\n * Should only happen when we were interrupted by this\n * processes' handler.\n */\n Assert(!InSignalHandler);\n\n /*\n * Signal handler must have cleaned out pid as updater.\n */\n Assert(PID_FROM_STATE(old_state) != MyProcPid);\n continue;\n }\n else\n {\n return true;\n }\n }\n\telse if (PID_FROM_STATE(current_state) == MyProcPid)\n\t{\n\t /*\n\t * This should only happen when in a signal handler. 
We don't\n\t * currently allow nesting of signal handlers.\n\t */\n\t Assert(!(current_state & PROPOSER_IN_SIGNAL_HANDLER_BIT));\n\n /* interrupt our own non-signal-handler update */\n new_state = old_state | PROPOSER_IN_SIGNAL_HANDLER_BIT;\n\n /* set which index is being proposed */\n new_state = (new_state & ~PROPOSER_BITS) |\n NEXT_PROPOSED_FIELD(old_state, &proposing);\n\n // FIXME: assert that previous value still was what we assumed\n pg_atomic_exchange_u32(&ptr_state.state, new_state);\n }\n\telse\n\t{\n do\n {\n pg_spin_delay();\n\n current_state = pg_atomic_read_u32(&ptr->state);\n } while (PID_FROM_STATE(current_state) != INVALID_PID)\n\t}\n }\n}\n\nWhile that's not trivial, it'd not be that expensive. The happy path\nwould be two 32bit atomic operations to simulate a 64bit one.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 5 Jun 2020 19:31:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Atomic operations within spinlocks"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-06-05 21:01:56 -0400, Tom Lane wrote:\n>> At some point I think we'll have to give up --disable-spinlocks; it's\n>> really of pretty marginal use (how often does anyone port PG to a new\n>> CPU type?) and the number of weird interactions it adds in this area\n>> seems like more than it's worth.\n\n> Indeed. And any new architecture one would port PG to would have good\n> enough compiler intrinsics to make that trivial. I still think it'd make\n> sense to have a fallback implementation using compiler intrinsics...\n\n> And I think we should just require 32bit atomics at the same time. Would\n> probably kill gaur though.\n\nNot only gaur. A quick buildfarm survey finds these active members\nreporting not having 32-bit atomics:\n\n anole | 2020-06-05 11:20:17 | pgac_cv_gcc_atomic_int32_cas=no\n chipmunk | 2020-05-29 22:27:56 | pgac_cv_gcc_atomic_int32_cas=no\n curculio | 2020-06-05 22:30:06 | pgac_cv_gcc_atomic_int32_cas=no\n frogfish | 2020-05-31 13:00:25 | pgac_cv_gcc_atomic_int32_cas=no\n gaur | 2020-05-19 13:33:25 | pgac_cv_gcc_atomic_int32_cas=no\n gharial | 2020-06-05 12:41:14 | pgac_cv_gcc_atomic_int32_cas=no\n hornet | 2020-06-05 09:11:26 | pgac_cv_gcc_atomic_int32_cas=no\n hoverfly | 2020-06-05 22:06:14 | pgac_cv_gcc_atomic_int32_cas=no\n locust | 2020-06-05 10:14:29 | pgac_cv_gcc_atomic_int32_cas=no\n mandrill | 2020-06-05 09:20:03 | pgac_cv_gcc_atomic_int32_cas=no\n prairiedog | 2020-06-05 09:55:49 | pgac_cv_gcc_atomic_int32_cas=no\n\nIt looks to me like this is mostly about compiler support not the\nhardware; that doesn't make it not a problem, though. (I also\nremain skeptical about the quality of the compiler intrinsics\non non-mainstream hardware.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Jun 2020 22:52:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Atomic operations within spinlocks"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-05 22:52:47 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2020-06-05 21:01:56 -0400, Tom Lane wrote:\n> >> At some point I think we'll have to give up --disable-spinlocks; it's\n> >> really of pretty marginal use (how often does anyone port PG to a new\n> >> CPU type?) and the number of weird interactions it adds in this area\n> >> seems like more than it's worth.\n>\n> > Indeed. And any new architecture one would port PG to would have good\n> > enough compiler intrinsics to make that trivial. I still think it'd make\n> > sense to have a fallback implementation using compiler intrinsics...\n>\n> > And I think we should just require 32bit atomics at the same time. Would\n> > probably kill gaur though.\n>\n> Not only gaur. A quick buildfarm survey finds these active members\n> reporting not having 32-bit atomics:\n\nHm, I don't think that's the right test. We have bespoke code to support\nmost of these, I think:\n\n\n> anole | 2020-06-05 11:20:17 | pgac_cv_gcc_atomic_int32_cas=no\n\nHas support via acc specific intrinsics.\n\n\n> chipmunk | 2020-05-29 22:27:56 | pgac_cv_gcc_atomic_int32_cas=no\n\nDoesn't have support for __atomic, but does have support for 32bit\n__sync.\n\n\n> gharial | 2020-06-05 12:41:14 | pgac_cv_gcc_atomic_int32_cas=no\n\n__sync support for both 32 and 64 bit.\n\n\n> curculio | 2020-06-05 22:30:06 | pgac_cv_gcc_atomic_int32_cas=no\n> frogfish | 2020-05-31 13:00:25 | pgac_cv_gcc_atomic_int32_cas=no\n\n__sync support for both 32 and 64 bit.\n\n\n> mandrill | 2020-06-05 09:20:03 | pgac_cv_gcc_atomic_int32_cas=no\n\n__sync support for 32, as well as as inline asm for 32bit atomics\n(although we might be able to add 64 bit).\n\n\n> hornet | 2020-06-05 09:11:26 | pgac_cv_gcc_atomic_int32_cas=no\n> hoverfly | 2020-06-05 22:06:14 | pgac_cv_gcc_atomic_int32_cas=no\n\n__sync support for both 32 and 64 bit, and we have open coded ppc asm.\n\n\n> locust | 2020-06-05 10:14:29 | 
pgac_cv_gcc_atomic_int32_cas=no\n> prairiedog | 2020-06-05 09:55:49 | pgac_cv_gcc_atomic_int32_cas=no\n\nWee, these don't have __sync? But I think it should be able to use the\nasm ppc implementation for 32 bit atomics.\n\n\n\n> gaur | 2020-05-19 13:33:25 | pgac_cv_gcc_atomic_int32_cas=no\n\nAs far as I understand pa-risc doesn't have any atomic instructions\nexcept for TAS.\n\n\nSo I think gaur is really the only one that'd drop.\n\n\n\n> It looks to me like this is mostly about compiler support not the\n> hardware; that doesn't make it not a problem, though. (I also\n> remain skeptical about the quality of the compiler intrinsics\n> on non-mainstream hardware.)\n\nI think that's fair enough for really old platforms, but at least for\ngcc / clang I don't think it's a huge concern for newer ones. Even if\nnot mainstream. For gcc/clang the intrinsics basically back the\nC11/C++11 \"language level\" atomics support. And those are extremely\nwidely used these days.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 5 Jun 2020 20:32:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Atomic operations within spinlocks"
},
{
"msg_contents": "On 2020-06-05 17:19:26 -0700, Andres Freund wrote:\n> Hi,\n>\n> On 2020-06-04 19:33:02 -0700, Andres Freund wrote:\n> > But it looks like that code is currently buggy (and looks like it always\n> > has been), because we don't look at the nested argument when\n> > initializing the semaphore. So we currently allocate too many\n> > semaphores, without benefiting from them :(.\n>\n> I wrote a patch for this, and when I got around to to testing it, I\n> found that our tests currently don't pass when using both\n> --disable-spinlocks and --disable-atomics. Turns out to not be related\n> to the issue above, but the global barrier support added in 13.\n>\n> That *reads* two 64 bit atomics in a signal handler. Which is normally\n> fine, but not at all cool when atomics (or just 64 bit atomics) are\n> backed by spinlocks. Because we can \"self interrupt\" while already\n> holding the spinlock.\n>\n> It looks to me that that's a danger whenever 64bit atomics are backed by\n> spinlocks, not just when both --disable-spinlocks and --disable-atomics\n> are used. But I suspect that it's really hard to hit the tiny window of\n> danger when those options aren't used. While we have buildfarm animals\n> testing each of those separately, we don't have one that tests both\n> together...\n>\n> I'm not really sure what to do about that issue. The easisest thing\n> would probably be to change the barrier generation to 32bit (which\n> doesn't have to use locks for reads in any situation). I tested doing\n> that, and it fixes the hangs for me.\n>\n>\n> Randomly noticed while looking at the code:\n> \tuint64\t\tflagbit = UINT64CONST(1) << (uint64) type;\n>\n> that shouldn't be 64bit, right?\n\nAttached is a series of patches addressing these issues, of varying\nquality:\n\n1) This fixes the above mentioned issue in the global barrier code by\n using 32bit atomics. That might be fine, or it might not. 
I just\n included it here because otherwise the tests cannot be run fully.\n\n\n2) Fixes spinlock emulation when more than INT_MAX spinlocks are\n initialized in the lifetime of a single backend\n\n3) Add spinlock tests to normal regression tests.\n - Currently as part of test_atomic_ops. Probably not worth having a\n separate SQL function?\n - Currently contains a test for 1) that's run when the spinlock\n emulation is used. Probably too slow to actually include? Takes 15s\n on my computer... OTOH, it's just with --disable-spinlocks...\n - Could probably remove the current spinlock tests after this. The\n only thing they additionally test is a stuck spinlock. Since\n they're not run currently, they don't seem worth much?\n\n4) Fix the potential for deadlocks when using atomics while holding a\n spinlock, add tests for that.\n\nAny comments?\n\nGreetings,\n\nAndres Freund",
"msg_date": "Mon, 8 Jun 2020 23:08:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Atomic operations within spinlocks"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-08 23:08:47 -0700, Andres Freund wrote:\n> On 2020-06-05 17:19:26 -0700, Andres Freund wrote:\n> > I wrote a patch for this, and when I got around to to testing it, I\n> > found that our tests currently don't pass when using both\n> > --disable-spinlocks and --disable-atomics. Turns out to not be related\n> > to the issue above, but the global barrier support added in 13.\n> >\n> > That *reads* two 64 bit atomics in a signal handler. Which is normally\n> > fine, but not at all cool when atomics (or just 64 bit atomics) are\n> > backed by spinlocks. Because we can \"self interrupt\" while already\n> > holding the spinlock.\n> >\n> > It looks to me that that's a danger whenever 64bit atomics are backed by\n> > spinlocks, not just when both --disable-spinlocks and --disable-atomics\n> > are used. But I suspect that it's really hard to hit the tiny window of\n> > danger when those options aren't used. While we have buildfarm animals\n> > testing each of those separately, we don't have one that tests both\n> > together...\n> >\n> > I'm not really sure what to do about that issue. The easisest thing\n> > would probably be to change the barrier generation to 32bit (which\n> > doesn't have to use locks for reads in any situation). I tested doing\n> > that, and it fixes the hangs for me.\n> >\n> >\n> > Randomly noticed while looking at the code:\n> > \tuint64\t\tflagbit = UINT64CONST(1) << (uint64) type;\n> >\n> > that shouldn't be 64bit, right?\n> \n> Attached is a series of patches addressing these issues, of varying\n> quality:\n> \n> 1) This fixes the above mentioned issue in the global barrier code by\n> using 32bit atomics. That might be fine, or it might not. I just\n> included it here because otherwise the tests cannot be run fully.\n\nHm. Looking at this again, perhaps the better fix would be to simply not\nlook at the concrete values of the barrier inside the signal handler?\nE.g. 
we could have a new PROCSIG_GLOBAL_BARRIER, which just triggers\nProcSignalBarrierPending to be set. And then have\nProcessProcSignalBarrier do the check that's currently in\nCheckProcSignalBarrier()?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 9 Jun 2020 12:37:23 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "global barrier & atomics in signal handlers (Re: Atomic operations\n within spinlocks)"
},
{
"msg_contents": "On Tue, Jun 9, 2020 at 3:37 PM Andres Freund <andres@anarazel.de> wrote:\n> Hm. Looking at this again, perhaps the better fix would be to simply not\n> look at the concrete values of the barrier inside the signal handler?\n> E.g. we could have a new PROCSIG_GLOBAL_BARRIER, which just triggers\n> ProcSignalBarrierPending to be set. And then have\n> ProcessProcSignalBarrier do the check that's currently in\n> CheckProcSignalBarrier()?\n\nThat seems like a good idea.\n\nAlso, I wonder if someone would be willing to set up a BF animal for this.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 9 Jun 2020 17:04:42 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: global barrier & atomics in signal handlers (Re: Atomic\n operations within spinlocks)"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-09 17:04:42 -0400, Robert Haas wrote:\n> On Tue, Jun 9, 2020 at 3:37 PM Andres Freund <andres@anarazel.de> wrote:\n> > Hm. Looking at this again, perhaps the better fix would be to simply not\n> > look at the concrete values of the barrier inside the signal handler?\n> > E.g. we could have a new PROCSIG_GLOBAL_BARRIER, which just triggers\n> > ProcSignalBarrierPending to be set. And then have\n> > ProcessProcSignalBarrier do the check that's currently in\n> > CheckProcSignalBarrier()?\n> \n> That seems like a good idea.\n\nCool.\n\n\n> Also, I wonder if someone would be willing to set up a BF animal for this.\n\nYou mean having both --disable-atomics and --disable-spinlocks? If so,\nI'm planning to do that (I already have the animals that do those\nseparately, so it seems to make sense to add it to that collection).\n\nWhat do you think about my idea of having a BEGIN/END_SIGNAL_HANDLER?\nThat'd make it much easier to write assertions forbidding palloc, 64bit\natomics, ...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 9 Jun 2020 15:54:08 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: global barrier & atomics in signal handlers (Re: Atomic\n operations within spinlocks)"
},
{
"msg_contents": "On Fri, Jun 5, 2020 at 8:19 PM Andres Freund <andres@anarazel.de> wrote:\n> Randomly noticed while looking at the code:\n> uint64 flagbit = UINT64CONST(1) << (uint64) type;\n>\n> that shouldn't be 64bit, right?\n\nI'm going to admit ignorance here. What's the proper coding rule?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 10 Jun 2020 07:26:32 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Atomic operations within spinlocks"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Jun 5, 2020 at 8:19 PM Andres Freund <andres@anarazel.de> wrote:\n>> Randomly noticed while looking at the code:\n>> \tuint64 flagbit = UINT64CONST(1) << (uint64) type;\n>> \n>> that shouldn't be 64bit, right?\n\n> I'm going to admit ignorance here. What's the proper coding rule?\n\nThe shift distance can't exceed 64, so there's no need for it to be\nwider than int. \"type\" is an enum, so explicitly casting it to an\nintegral type seems like good practice, but int is sufficient.\n\nISTR older compilers insisting that the shift distance not be\nwider than int. But C99 doesn't seem to require that -- it only\nrestricts the value of the right operand.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 10 Jun 2020 09:51:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Atomic operations within spinlocks"
},
{
"msg_contents": "On Tue, Jun 9, 2020 at 6:54 PM Andres Freund <andres@anarazel.de> wrote:\n> What do you think about my idea of having a BEGIN/END_SIGNAL_HANDLER?\n> That'd make it much easier to write assertions forbidding palloc, 64bit\n> atomics, ...\n\nI must have missed the previous place where you suggested this, but I\nthink it's a good idea.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 10 Jun 2020 13:37:59 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: global barrier & atomics in signal handlers (Re: Atomic\n operations within spinlocks)"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-10 07:26:32 -0400, Robert Haas wrote:\n> On Fri, Jun 5, 2020 at 8:19 PM Andres Freund <andres@anarazel.de> wrote:\n> > Randomly noticed while looking at the code:\n> > uint64 flagbit = UINT64CONST(1) << (uint64) type;\n> >\n> > that shouldn't be 64bit, right?\n> \n> I'm going to admit ignorance here. What's the proper coding rule?\n\nWell, pss_barrierCheckMask member is just 32bit, so it seems odd to\ndeclare the local variable 64bit?\n\n\tuint64\t\tflagbit = UINT64CONST(1) << (uint64) type;\n...\n\t\tpg_atomic_fetch_or_u32(&slot->pss_barrierCheckMask, flagbit);\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 11 Jun 2020 10:26:53 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Atomic operations within spinlocks"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-10 13:37:59 -0400, Robert Haas wrote:\n> On Tue, Jun 9, 2020 at 6:54 PM Andres Freund <andres@anarazel.de> wrote:\n> > What do you think about my idea of having a BEGIN/END_SIGNAL_HANDLER?\n> > That'd make it much easier to write assertions forbidding palloc, 64bit\n> > atomics, ...\n> \n> I must have missed the previous place where you suggested this, but I\n> think it's a good idea.\n\nhttps://www.postgresql.org/message-id/20200606023103.avzrctgv7476xj7i%40alap3.anarazel.de\n\nIt'd be neat if we could do that entirely within pqsignal(). But that'd\nrequire some additional state (I think an array of handlers, indexed by\nsignum).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 11 Jun 2020 10:31:28 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: global barrier & atomics in signal handlers (Re: Atomic\n operations within spinlocks)"
},
{
"msg_contents": "On Thu, Jun 11, 2020 at 1:26 PM Andres Freund <andres@anarazel.de> wrote:\n> Well, pss_barrierCheckMask member is just 32bit, so it seems odd to\n> declare the local variable 64bit?\n>\n> uint64 flagbit = UINT64CONST(1) << (uint64) type;\n> ...\n> pg_atomic_fetch_or_u32(&slot->pss_barrierCheckMask, flagbit);\n\nOooooops.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 11 Jun 2020 15:50:28 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Atomic operations within spinlocks"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-09 17:04:42 -0400, Robert Haas wrote:\n> On Tue, Jun 9, 2020 at 3:37 PM Andres Freund <andres@anarazel.de> wrote:\n> > Hm. Looking at this again, perhaps the better fix would be to simply not\n> > look at the concrete values of the barrier inside the signal handler?\n> > E.g. we could have a new PROCSIG_GLOBAL_BARRIER, which just triggers\n> > ProcSignalBarrierPending to be set. And then have\n> > ProcessProcSignalBarrier do the check that's currently in\n> > CheckProcSignalBarrier()?\n> \n> That seems like a good idea.\n\nWhat do you think about 0002?\n\n\nWith regard to the cost of the expensive test in 0003, I'm somewhat\ninclined to add that to the buildfarm for a few days and see how it\nactually affects the few bf animals without atomics. We can rip it out\nafter we got some additional coverage (or leave it in if it turns out to\nbe cheap enough in comparison).\n\n\n> Also, I wonder if someone would be willing to set up a BF animal for this.\n\nFWIW, I've requested a buildfarm animal id for this a few days ago, but\nhaven't received a response yet...\n\nGreetings,\n\nAndres Freund",
"msg_date": "Mon, 15 Jun 2020 18:37:17 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: global barrier & atomics in signal handlers (Re: Atomic\n operations within spinlocks)"
},
{
"msg_contents": "On Mon, Jun 15, 2020 at 9:37 PM Andres Freund <andres@anarazel.de> wrote:\n> What do you think about 0002?\n>\n> With regard to the cost of the expensive test in 0003, I'm somewhat\n> inclined to add that to the buildfarm for a few days and see how it\n> actually affects the few bf animals without atomics. We can rip it out\n> after we got some additional coverage (or leave it in if it turns out to\n> be cheap enough in comparison).\n\nI looked over these patches briefly today. I don't have any objection\nto 0001 or 0002. I think 0003 looks a little strange: it seems to be\ntesting things that might be implementation details of other things,\nand I'm not sure that's really correct. In particular:\n\n+ /* and that \"contended\" acquisition works */\n+ s_lock(&struct_w_lock.lock, \"testfile\", 17, \"testfunc\");\n+ S_UNLOCK(&struct_w_lock.lock);\n\nI didn't think we had formally promised that s_lock() is actually\ndefined or working on all platforms.\n\nMore generally, I don't think it's entirely clear what all of these\ntests are testing. Like, I can see that data_before and data_after are\nintended to test that the lock actually fits in the space allowed for\nit, but at the same time, I think empty implementations of all of\nthese functions would pass regression, as would many horribly or\nsubtly buggy implementations. For example, consider this:\n\n+ /* test basic operations via the SpinLock* API */\n+ SpinLockInit(&struct_w_lock.lock);\n+ SpinLockAcquire(&struct_w_lock.lock);\n+ SpinLockRelease(&struct_w_lock.lock);\n\nWhat does it look like for this test to fail? I guess one of those\noperations has to fail an assert or hang forever, because it's not\nlike we're checking the return value. So I feel like the intent of\nthese tests isn't entirely clear, and should probably be explained\nbetter, at a minimum -- and perhaps we should think harder about what\na good testing framework would look like. 
I would rather have tests\nthat either pass or fail and report a result explicitly, rather than\ntests that rely on hangs or crashes.\n\nParenthetically, \"cyle\" != \"cycle\".\n\nI don't have any real complaints about the functionality of 0004 on a\nquick read-through, but I'm again a bit skeptical of the tests. Not as\nmuch as with 0003, though.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 16 Jun 2020 14:59:19 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: global barrier & atomics in signal handlers (Re: Atomic\n operations within spinlocks)"
},
{
"msg_contents": "On 2020-Jun-15, Andres Freund wrote:\n\n> > Also, I wonder if someone would be willing to set up a BF animal for this.\n> \n> FWIW, I've requested a buildfarm animal id for this a few days ago, but\n> haven't received a response yet...\n\nI did send it out, with name rorqual -- didn't you get that? Will send\nthe secret separately.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 16 Jun 2020 15:20:11 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: global barrier & atomics in signal handlers (Re: Atomic\n operations within spinlocks)"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-16 14:59:19 -0400, Robert Haas wrote:\n> On Mon, Jun 15, 2020 at 9:37 PM Andres Freund <andres@anarazel.de> wrote:\n> > What do you think about 0002?\n> >\n> > With regard to the cost of the expensive test in 0003, I'm somewhat\n> > inclined to add that to the buildfarm for a few days and see how it\n> > actually affects the few bf animals without atomics. We can rip it out\n> > after we got some additional coverage (or leave it in if it turns out to\n> > be cheap enough in comparison).\n>\n> I looked over these patches briefly today. I don't have any objection\n> to 0001 or 0002.\n\nCool. I was mainly interested in those for now.\n\n\n> I think 0003 looks a little strange: it seems to be\n> testing things that might be implementation details of other things,\n> and I'm not sure that's really correct. In particular:\n\nMy main motivation was to have something that runs more often than than\nthe embeded test in s_lock.c's that nobody ever runs (they wouldn't even\npass with disabled spinlocks, as S_LOCK_FREE isn't implemented).\n\n\n> + /* and that \"contended\" acquisition works */\n> + s_lock(&struct_w_lock.lock, \"testfile\", 17, \"testfunc\");\n> + S_UNLOCK(&struct_w_lock.lock);\n>\n> I didn't think we had formally promised that s_lock() is actually\n> defined or working on all platforms.\n\nHm? Isn't s_lock the, as its comment says, \"platform-independent portion\nof waiting for a spinlock.\"? I also don't think we need to purely\nfollow external APIs in internal tests.\n\n\n> More generally, I don't think it's entirely clear what all of these\n> tests are testing. Like, I can see that data_before and data_after are\n> intended to test that the lock actually fits in the space allowed for\n> it, but at the same time, I think empty implementations of all of\n> these functions would pass regression, as would many horribly or\n> subtly buggy implementations.\n\nSure, there's a lot that'd pass. But it's more than we had before. 
It\ndid catch a bug much quicker than I'd have otherwise found it, FWIW.\n\nI don't think an empty implementation would pass btw, as long as TAS is\ndefined.\n\n> So I feel like the intent of these tests isn't entirely clear, and\n> should probably be explained better, at a minimum -- and perhaps we\n> should think harder about what a good testing framework would look\n> like.\n\nYea, we could use something better. But I don't see that happening\nquickly, and having something seems better than nothing.\n\n\n> I would rather have tests that either pass or fail and report a result\n> explicitly, rather than tests that rely on hangs or crashes.\n\nThat seems quite hard to achieve. I really just wanted to have something\nI can do some very basic tests to catch issues quicker.\n\n\nThe atomics tests found numerous issues btw, despite also not testing\nconcurrency.\n\n\nI think we generally have way too few of such trivial tests. They can\nfind plenty \"real world\" issues, but more importantly make it much\nquicker to iterate when working on some piece of code.\n\n\n> I don't have any real complaints about the functionality of 0004 on a\n> quick read-through, but I'm again a bit skeptical of the tests. Not as\n> much as with 0003, though.\n\nWithout the tests I couldn't even reproduce a deadlock due to the\nnesting. So they imo are pretty essential?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 16 Jun 2020 12:27:57 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: global barrier & atomics in signal handlers (Re: Atomic\n operations within spinlocks)"
},
{
"msg_contents": "On Tue, Jun 16, 2020 at 3:28 PM Andres Freund <andres@anarazel.de> wrote:\n> > I think 0003 looks a little strange: it seems to be\n> > testing things that might be implementation details of other things,\n> > and I'm not sure that's really correct. In particular:\n>\n> My main motivation was to have something that runs more often than than\n> the embeded test in s_lock.c's that nobody ever runs (they wouldn't even\n> pass with disabled spinlocks, as S_LOCK_FREE isn't implemented).\n\nSure, that makes sense.\n\n> > + /* and that \"contended\" acquisition works */\n> > + s_lock(&struct_w_lock.lock, \"testfile\", 17, \"testfunc\");\n> > + S_UNLOCK(&struct_w_lock.lock);\n> >\n> > I didn't think we had formally promised that s_lock() is actually\n> > defined or working on all platforms.\n>\n> Hm? Isn't s_lock the, as its comment says, \"platform-independent portion\n> of waiting for a spinlock.\"? I also don't think we need to purely\n> follow external APIs in internal tests.\n\nI feel like we at least didn't use to use that on all platforms, but I\nmight be misremembering. It seems odd and confusing that we have both\nS_LOCK() and s_lock(), anyway. Differentiating functions based on case\nis not great practice.\n\n> Sure, there's a lot that'd pass. But it's more than we had before. It\n> did catch a bug much quicker than I'd have otherwise found it, FWIW.\n>\n> I don't think an empty implementation would pass btw, as long as TAS is\n> defined.\n\nFair enough.\n\n> Yea, we could use something better. But I don't see that happening\n> quickly, and having something seems better than nothing.\n>\n> That seems quite hard to achieve. I really just wanted to have something\n> I can do some very basic tests to catch issues quicker.\n>\n> The atomics tests found numerous issues btw, despite also not testing\n> concurrency.\n>\n> I think we generally have way too few of such trivial tests. 
They can\n> find plenty \"real world\" issues, but more importantly make it much\n> quicker to iterate when working on some piece of code.\n>\n> Without the tests I couldn't even reproduce a deadlock due to the\n> nesting. So they imo are pretty essential?\n\nI'm not telling you not to commit these; I'm just more skeptical of\nwhether they are the right approach than you seem to be. But that's\nOK: people can like different things, and I don't know exactly what\nwould be better anyway.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 17 Jun 2020 10:34:31 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: global barrier & atomics in signal handlers (Re: Atomic\n operations within spinlocks)"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-17 10:34:31 -0400, Robert Haas wrote:\n> On Tue, Jun 16, 2020 at 3:28 PM Andres Freund <andres@anarazel.de> wrote:\n> > > I think 0003 looks a little strange: it seems to be\n> > > testing things that might be implementation details of other things,\n> > > and I'm not sure that's really correct. In particular:\n> > Hm? Isn't s_lock the, as its comment says, \"platform-independent portion\n> > of waiting for a spinlock.\"? I also don't think we need to purely\n> > follow external APIs in internal tests.\n> \n> I feel like we at least didn't use to use that on all platforms, but I\n> might be misremembering.\n\nThere's only one definition of S_LOCK, and s_lock is the only spinlock\nrelated user of perform_spin_delay(). So I don't think so?\n\n\n> It seems odd and confusing that we have both\n> S_LOCK() and s_lock(), anyway. Differentiating functions based on case\n> is not great practice.\n\nIt's a terrible idea, yes. Since we don't actually have any non-default\nimplementations of S_LOCK, perhaps we should just rip it out? It'd\nprobably be clearer if SpinLockAcquire() would be what uses TAS() and\nfalls back to s_lock (best renamed to s_lock_slowpath or such).\n\nIt'd perhaps also be good to make SpinLockAcquire() a static inline\ninstead of a #define, so it can be properly attributed in debuggers and\nprofilers.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Jun 2020 11:33:54 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: global barrier & atomics in signal handlers (Re: Atomic\n operations within spinlocks)"
},
{
"msg_contents": "On Wed, Jun 17, 2020 at 2:33 PM Andres Freund <andres@anarazel.de> wrote:\n> > It seems odd and confusing that we have both\n> > S_LOCK() and s_lock(), anyway. Differentiating functions based on case\n> > is not great practice.\n>\n> It's a terrible idea, yes. Since we don't actually have any non-default\n> implementations of S_LOCK, perhaps we should just rip it out?\n\nI think we should rip out the conditional nature of the definition and\nfix the comments. I don't think I prefer getting rid of it completely.\n\nBut then again on the other hand, what's the point of this crap anyway:\n\n#define SpinLockInit(lock) S_INIT_LOCK(lock)\n#define SpinLockAcquire(lock) S_LOCK(lock)\n#define SpinLockRelease(lock) S_UNLOCK(lock)\n#define SpinLockFree(lock) S_LOCK_FREE(lock)\n\nThis seems like it's straight out of the department of pointless\nabstraction layers. Maybe we should remove all of the S_WHATEVER()\nstuff and just define SpinLockAcquire() where we currently define\nS_LOCK(), SpinLockRelease() where we currently define S_UNLOCK(), etc.\n\nAnd, as you say, make them static inline functions while we're at it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 17 Jun 2020 15:27:26 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: global barrier & atomics in signal handlers (Re: Atomic\n operations within spinlocks)"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> This seems like it's straight out of the department of pointless\n> abstraction layers. Maybe we should remove all of the S_WHATEVER()\n> stuff and just define SpinLockAcquire() where we currently define\n> S_LOCK(), SpinLockRelease() where we currently define S_UNLOCK(), etc.\n> And, as you say, make them static inline functions while we're at it.\n\nThe macros are kind of necessary unless you want to make s_lock.h\na bunch messier, because we use #ifdef tests on them.\n\nWe could get rid of the double layer of macros, sure, but TBH that\nsounds like change for the sake of change rather than a useful\nimprovement.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Jun 2020 15:45:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: global barrier & atomics in signal handlers (Re: Atomic\n operations within spinlocks)"
},
{
"msg_contents": "On Wed, Jun 17, 2020 at 3:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > This seems like it's straight out of the department of pointless\n> > abstraction layers. Maybe we should remove all of the S_WHATEVER()\n> > stuff and just define SpinLockAcquire() where we currently define\n> > S_LOCK(), SpinLockRelease() where we currently define S_UNLOCK(), etc.\n> > And, as you say, make them static inline functions while we're at it.\n>\n> The macros are kind of necessary unless you want to make s_lock.h\n> a bunch messier, because we use #ifdef tests on them.\n\nWhere?\n\n> We could get rid of the double layer of macros, sure, but TBH that\n> sounds like change for the sake of change rather than a useful\n> improvement.\n\nReally? Multiple layers of macros seem like they pretty clearly make\nthe source code harder to understand. There are plenty of places where\nsuch devices are necessary for one reason or another, but it doesn't\nseem like something we ought to keep around for no reason.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 17 Jun 2020 16:05:43 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: global barrier & atomics in signal handlers (Re: Atomic\n operations within spinlocks)"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Jun 17, 2020 at 3:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The macros are kind of necessary unless you want to make s_lock.h\n>> a bunch messier, because we use #ifdef tests on them.\n\n> Where?\n\nSee the \"Default Definitions\", down near the end.\n\n>> We could get rid of the double layer of macros, sure, but TBH that\n>> sounds like change for the sake of change rather than a useful\n>> improvement.\n\n> Really? Multiple layers of macros seem like they pretty clearly make\n> the source code harder to understand. There are plenty of places where\n> such devices are necessary for one reason or another, but it doesn't\n> seem like something we ought to keep around for no reason.\n\nI wouldn't object to making the outer-layer macros in spin.h into static\ninlines; as mentioned, that might have some debugging benefits. But I\nthink messing with s_lock.h for marginal cosmetic reasons is a foolish\nidea. For one thing, there's no way whoever does it can verify all the\narchitecture-specific stanzas. (I don't think we even have all of them\ncovered in the buildfarm.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Jun 2020 17:29:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: global barrier & atomics in signal handlers (Re: Atomic\n operations within spinlocks)"
},
{
"msg_contents": "On Wed, Jun 17, 2020 at 5:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> See the \"Default Definitions\", down near the end.\n\nOh, yeah. I didn't realize you meant just inside this file itself.\nThat is slightly awkward. I initially thought there was no problem\nbecause there seem to be no remaining non-default definitions of\nS_LOCK, but I now see that the other macros still do have some\nnon-default definitions. Hmm.\n\n> > Really? Multiple layers of macros seem like they pretty clearly make\n> > the source code harder to understand. There are plenty of places where\n> > such devices are necessary for one reason or another, but it doesn't\n> > seem like something we ought to keep around for no reason.\n>\n> I wouldn't object to making the outer-layer macros in spin.h into static\n> inlines; as mentioned, that might have some debugging benefits. But I\n> think messing with s_lock.h for marginal cosmetic reasons is a foolish\n> idea. For one thing, there's no way whoever does it can verify all the\n> architecture-specific stanzas. (I don't think we even have all of them\n> covered in the buildfarm.)\n\nIt would be a pretty mechanical change to use a separate preprocessor\nsymbol for the conditional and just define the static inline functions\non the spot. There might be one or two goofs, but if those platforms\nare not in the buildfarm, they're either dead and they don't matter,\nor someone will tell us what we did wrong. I don't know. I don't have\na huge desire to spend time cleaning up s_lock.h and I do think it's\nbetter not to churn stuff around just for the heck of it, but I'm also\nsympathetic to Andres's point that using macros everywhere is\ndebugger-unfriendly.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Jun 2020 11:42:16 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: global barrier & atomics in signal handlers (Re: Atomic\n operations within spinlocks)"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Jun 17, 2020 at 5:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I wouldn't object to making the outer-layer macros in spin.h into static\n>> inlines; as mentioned, that might have some debugging benefits. But I\n>> think messing with s_lock.h for marginal cosmetic reasons is a foolish\n>> idea. For one thing, there's no way whoever does it can verify all the\n>> architecture-specific stanzas. (I don't think we even have all of them\n>> covered in the buildfarm.)\n\n> It would be a pretty mechanical change to use a separate preprocessor\n> symbol for the conditional and just define the static inline functions\n> on the spot. There might be one or two goofs, but if those platforms\n> are not in the buildfarm, they're either dead and they don't matter,\n> or someone will tell us what we did wrong. I don't know. I don't have\n> a huge desire to spend time cleaning up s_lock.h and I do think it's\n> better not to churn stuff around just for the heck of it, but I'm also\n> sympathetic to Andres's point that using macros everywhere is\n> debugger-unfriendly.\n\nSure, but wouldn't making the SpinLockAcquire layer into static inlines be\nsufficient to address that point, with no need to touch s_lock.h at all?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Jun 2020 11:59:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: global barrier & atomics in signal handlers (Re: Atomic\n operations within spinlocks)"
},
{
"msg_contents": "On Thu, Jun 18, 2020 at 11:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Sure, but wouldn't making the SpinLockAcquire layer into static inlines be\n> sufficient to address that point, with no need to touch s_lock.h at all?\n\nI mean, wouldn't you then end up with a bunch of 1-line functions\nwhere you can step into the function but not through whatever\nindividual things it does?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Jun 2020 12:21:51 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: global barrier & atomics in signal handlers (Re: Atomic\n operations within spinlocks)"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Jun 18, 2020 at 11:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Sure, but wouldn't making the SpinLockAcquire layer into static inlines be\n>> sufficient to address that point, with no need to touch s_lock.h at all?\n\n> I mean, wouldn't you then end up with a bunch of 1-line functions\n> where you can step into the function but not through whatever\n> individual things it does?\n\nNot following your point. The s_lock.h implementations tend to be either\nsimple C statements (\"*lock = 0\") or asm blocks; if you feel a need to\nstep through them you're going to be resorting to \"si\" anyway.\n\nI think the main usefulness of doing anything here would be (a) separating\nthe spinlock infrastructure from callers and (b) ensuring that we have a\ndeclared argument type, and single-evaluation semantics, for the spinlock\nfunction parameters. Both of those are adequately addressed by fixing\nspin.h, IMO anyway.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Jun 2020 12:29:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: global barrier & atomics in signal handlers (Re: Atomic\n operations within spinlocks)"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-18 12:29:40 -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Thu, Jun 18, 2020 at 11:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Sure, but wouldn't making the SpinLockAcquire layer into static inlines be\n> >> sufficient to address that point, with no need to touch s_lock.h at all?\n> \n> > I mean, wouldn't you then end up with a bunch of 1-line functions\n> > where you can step into the function but not through whatever\n> > individual things it does?\n> \n> Not following your point. The s_lock.h implementations tend to be either\n> simple C statements (\"*lock = 0\") or asm blocks; if you feel a need to\n> step through them you're going to be resorting to \"si\" anyway.\n\nI agree on that.\n\n\nI do think it'd be better to not have the S_LOCK macro though (but have\nTAS/TAS_SPIN as we do now). And instead have SpinLockAcquire() call\ns_lock() (best renamed to something a bit more meaningful). Makes the\ncode a bit easier to understand (no S_LOCK vs s_lock) and yields simpler\nmacros.\n\nThere's currently no meaningful ifdefs for S_LOCK (in contrast to\ne.g. S_UNLOCK), so I don't see it making s_lock.h more complicated.\n\nI think part of the issue here is that the naming of the s_lock exposed\nmacros is confusing. We have S_INIT_LOCK, TAS, SPIN_DELAY, TAS,\nTAS_SPIN, S_UNLOCK, S_LOCK_FREE that are essentially hardware\ndependent. But then there's S_LOCK which basically isn't.\n\nIt may have made some sense when the file was originally written, if one\nimagines that S_ is the only external API, and the rest is\nimplementation. 
But given that s_lock() uses TAS() directly (and says\n\"platform-independent portion of waiting for a spinlock\"), and that we\nonly have one definition of S_UNLOCK that doesn't seem quite right.\n\n\n> I think the main usefulness of doing anything here would be (a) separating\n> the spinlock infrastructure from callers and (b) ensuring that we have a\n> declared argument type, and single-evaluation semantics, for the spinlock\n> function parameters. Both of those are adequately addressed by fixing\n> spin.h, IMO anyway.\n\nThe abstraction point made me grep for includes of s_lock.h. Seems we\nhave some unnecessary includes of s_lock.h.\n\nlwlock.h doesn't need to include spinlock related things anymore, and\nhasn't for years, the spinlocks are replaced with atomics. That seems\npretty obvious. I'm gonna fix that in master, unless somebody thinks we\nshould do that more widely?\n\nBut we also have s_lock.h includes in condition_variable.h and\nmain.c. Seems the former should instead include spin.h and the latter\ninclude should just be removed?\n\n\nAll of this however makes me wonder whether it's worth polishing\nspinlocks instead of just ripping them out. Obviously we'd still need a\nfallback for atomics, but it not be hard to just replace the\nspinlock use with either \"smaller\" atomics or semaphores.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 18 Jun 2020 11:30:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: global barrier & atomics in signal handlers (Re: Atomic\n operations within spinlocks)"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-16 15:20:11 -0400, Alvaro Herrera wrote:\n> On 2020-Jun-15, Andres Freund wrote:\n> \n> > > Also, I wonder if someone would be willing to set up a BF animal for this.\n> > \n> > FWIW, I've requested a buildfarm animal id for this a few days ago, but\n> > haven't received a response yet...\n> \n> I did send it out, with name rorqual -- didn't you get that? Will send\n> the secret separately.\n\nThat animal is now live. Will take a bit for all branches to report in\nthough.\n\nNeed to get faster storage for my buildfarm animal host...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 18 Jun 2020 15:19:52 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: global barrier & atomics in signal handlers (Re: Atomic\n operations within spinlocks)"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen libpq is used to COPY data to the server, it doesn't properly\nhandle errors.\n\nAn easy way to trigger the problem is to start pgbench -i with a\nsufficiently large scale, and then just shut the server down. pgbench\nwill happily use 100% of the cpu trying to send data to the server, even\nthough libpq knows that the connection is broken.\n\nIt can't even be cancelled using ctrl-c anymore, because the cancel\nrequest cannot be sent:\n\nandres@awork3:~/src/postgresql$ pgbench -i -s 4000 -q\ndropping old tables...\ncreating tables...\ngenerating data (client-side)...\n80889300 of 400000000 tuples (20%) done (elapsed 85.00 s, remaining 335.33 s)\n^CCould not send cancel request: PQcancel() -- connect() failed: No such file or directory\n\n\nThis is partially an old problem, and partially got recently\nworse. Before the below commit we detected terminated connections, but\nwe didn't handle copy failing.\n\n\nThe failure to handle terminated connections originates in:\n\ncommit 1f39a1c0641531e0462a4822f2dba904c5d4d699\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: 2019-03-19 16:20:20 -0400\n\n Restructure libpq's handling of send failures.\n\n\nThe problem is basically that pqSendSome() returns *success* in all\nfailure cases. Both when a failure is already known:\n\n+ /*\n+ * If we already had a write failure, we will never again try to send data\n+ * on that connection. Even if the kernel would let us, we've probably\n+ * lost message boundary sync with the server. 
conn->write_failed\n+ * therefore persists until the connection is reset, and we just discard\n+ * all data presented to be written.\n+ */\n+ if (conn->write_failed)\n+ {\n+ /* conn->write_err_msg should be set up already */\n+ conn->outCount = 0;\n+ return 0;\n+ }\n+\n\nand when initially \"diagnosing\" the failure:\n\t\t\t/* Anything except EAGAIN/EWOULDBLOCK/EINTR is trouble */\n\t\t\tswitch (SOCK_ERRNO)\n...\n\t\t\t\t\t/* Discard queued data; no chance it'll ever be sent */\n\t\t\t\t\tconn->outCount = 0;\n\t\t\t\t\treturn 0;\n\nThe idea of the above commit was:\n Instead, let's fix this in a more general fashion by eliminating\n pqHandleSendFailure altogether, and instead arranging to postpone\n all reports of send failures in libpq until after we've made an\n attempt to read and process server messages. The send failure won't\n be reported at all if we find a server message or detect input EOF.\n\nbut that doesn't work here, because we never process the error\nmessage. There's no code in pqParseInput3() to process server messages\nwhile doing copy.\n\n\nI'm honestly a bit baffled. How can we not have noticed that COPY FROM\nSTDIN doesn't handle errors before the input is exhausted? It's not just\npgbench, it's psql (and I asume pg_restore) etc as well.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 3 Jun 2020 13:12:42 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "libpq copy error handling busted"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> When libpq is used to COPY data to the server, it doesn't properly\n> handle errors.\n> This is partially an old problem, and partially got recently\n> worse. Before the below commit we detected terminated connections, but\n> we didn't handle copy failing.\n\nYeah. After poking at that for a little bit, there are at least three\nproblems:\n\n* pqSendSome() is responsible not only for pushing out data, but for\ncalling pqReadData in any situation where it can't get rid of the data\npromptly. 1f39a1c06 overlooked that requirement, and the upshot is\nthat we don't necessarily notice that the connection is broken (it's\npqReadData's job to detect that). Putting a pqReadData call into\nthe early-exit path helps, but doesn't fix things completely.\n\n* The more longstanding problem is that the PQputCopyData code path\ndoesn't have any mechanism for consuming an 'E' (error) message\nonce pqReadData has collected it. AFAICS that's ancient. (It does\nnot affect the behavior of this case if you use an immediate-mode\nshutdown, because then the backend never issues an 'E' message;\nbut it does matter in a fast-mode shutdown.) I think that the\nidea was to let the client dump all its copy data and then report\nthe error message when PQendCopy is called, but as you say, that's\nnone too friendly when there's gigabytes of data involved. I'm\nnot sure we can do anything about this without effectively changing\nthe client API for copy errors, though.\n\n* As for control-C not getting out of it: there is\n\n\t\tif (CancelRequested)\n\t\t\tbreak;\n\nin pgbench's loop, but this does nothing in this scenario because\nfe-utils/cancel.c only sets that flag when it successfully sends a\nCancel ... which it certainly cannot if the postmaster is gone.\nI suspect that this may be relatively recent breakage. It doesn't look\ntoo well thought out, in any case. 
The places that are testing this\nflag look like they'd rather not be bothered with the fine point of\nwhether the cancel request actually went anywhere. (And aside from this\nissue, I see no mechanism for that flag to become unset once it's set.\nCurrent users of cancel.c probably don't care, but we'd have noticed if\nwe tried to make psql use it.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 03 Jun 2020 18:41:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: libpq copy error handling busted"
},
{
"msg_contents": "I wrote:\n> * pqSendSome() is responsible not only for pushing out data, but for\n> calling pqReadData in any situation where it can't get rid of the data\n> promptly. 1f39a1c06 overlooked that requirement, and the upshot is\n> that we don't necessarily notice that the connection is broken (it's\n> pqReadData's job to detect that). Putting a pqReadData call into\n> the early-exit path helps, but doesn't fix things completely.\n\nAh, it's better if I put the pqReadData call into *both* the paths\nwhere 1f39a1c06 made pqSendSome give up. The attached patch seems\nto fix the issue for the \"pgbench -i\" scenario, with either fast-\nor immediate-mode server stop. I tried it with and without SSL too,\njust to see. Still, it's not clear to me whether this might worsen\nany of the situations we discussed in the lead-up to 1f39a1c06 [1].\nThomas, are you in a position to redo any of that testing?\n\n> * The more longstanding problem is that the PQputCopyData code path\n> doesn't have any mechanism for consuming an 'E' (error) message\n> once pqReadData has collected it.\n\nAt least with pgbench's approach (die immediately on PQputline failure)\nthis isn't very relevant once we apply the attached. Perhaps we should\nrevisit this behavior anyway, but I'd be afraid to back-patch a change\nof that nature.\n\n> * As for control-C not getting out of it: there is\n> \t\tif (CancelRequested)\n> \t\t\tbreak;\n> in pgbench's loop, but this does nothing in this scenario because\n> fe-utils/cancel.c only sets that flag when it successfully sends a\n> Cancel ... which it certainly cannot if the postmaster is gone.\n\nI'll send a patch for this later.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/CAEepm%3D2n6Nv%2B5tFfe8YnkUm1fXgvxR0Mm1FoD%2BQKG-vLNGLyKg%40mail.gmail.com",
"msg_date": "Wed, 03 Jun 2020 21:35:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: libpq copy error handling busted"
},
{
"msg_contents": "On Thu, Jun 4, 2020 at 1:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > * pqSendSome() is responsible not only for pushing out data, but for\n> > calling pqReadData in any situation where it can't get rid of the data\n> > promptly. 1f39a1c06 overlooked that requirement, and the upshot is\n> > that we don't necessarily notice that the connection is broken (it's\n> > pqReadData's job to detect that). Putting a pqReadData call into\n> > the early-exit path helps, but doesn't fix things completely.\n>\n> Ah, it's better if I put the pqReadData call into *both* the paths\n> where 1f39a1c06 made pqSendSome give up. The attached patch seems\n> to fix the issue for the \"pgbench -i\" scenario, with either fast-\n> or immediate-mode server stop. I tried it with and without SSL too,\n> just to see. Still, it's not clear to me whether this might worsen\n> any of the situations we discussed in the lead-up to 1f39a1c06 [1].\n> Thomas, are you in a position to redo any of that testing?\n\nYes, sure. The testing consisted of running on a system with OpenSSL\n1.1.1a (older versions didn't have the problem). It originally showed\nup on eelpout, a very underpowered build farm machine running Linux on\nARM64, but then later we worked out we could make it happen on a Mac\nor any other Linux system if we had bad enough luck or if we added a\nsleep in a particular spot. We could do it with psql running in a\nloop using a bad certificate from the testing setup stuff, as shown\nhere:\n\nhttps://www.postgresql.org/message-id/CA%2BhUKGJafyTgpsYBgQGt1EX0O8UnL4VGHSc7J0KZyMH4_jPGBw%40mail.gmail.com\n\nI don't have access to eelpout from where I am right now, but I'll try\nthat test now on the Debian 10 amd64 system I have here. 
OpenSSL has\nsince moved on to 1.1.1d-0+deb10u3, but that should be fine, the\ntriggering change was the move to TLS1.3 so let me see what happens if\nI do that with your patch applied...\n\n\n\n\n> > * The more longstanding problem is that the PQputCopyData code path\n> > doesn't have any mechanism for consuming an 'E' (error) message\n> > once pqReadData has collected it.\n>\n> At least with pgbench's approach (die immediately on PQputline failure)\n> this isn't very relevant once we apply the attached. Perhaps we should\n> revisit this behavior anyway, but I'd be afraid to back-patch a change\n> of that nature.\n>\n> > * As for control-C not getting out of it: there is\n> > if (CancelRequested)\n> > break;\n> > in pgbench's loop, but this does nothing in this scenario because\n> > fe-utils/cancel.c only sets that flag when it successfully sends a\n> > Cancel ... which it certainly cannot if the postmaster is gone.\n>\n> I'll send a patch for this later.\n>\n> regards, tom lane\n>\n> [1] https://www.postgresql.org/message-id/flat/CAEepm%3D2n6Nv%2B5tFfe8YnkUm1fXgvxR0Mm1FoD%2BQKG-vLNGLyKg%40mail.gmail.com\n\n\n",
"msg_date": "Thu, 4 Jun 2020 13:53:59 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: libpq copy error handling busted"
},
{
"msg_contents": "On Thu, Jun 4, 2020 at 1:53 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Jun 4, 2020 at 1:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Ah, it's better if I put the pqReadData call into *both* the paths\n> > where 1f39a1c06 made pqSendSome give up. The attached patch seems\n> > to fix the issue for the \"pgbench -i\" scenario, with either fast-\n> > or immediate-mode server stop. I tried it with and without SSL too,\n> > just to see. Still, it's not clear to me whether this might worsen\n> > any of the situations we discussed in the lead-up to 1f39a1c06 [1].\n> > Thomas, are you in a position to redo any of that testing?\n\nIt seems to be behave correctly in that scenario.\n\nHere's what I tested. First, I put this into pgdata/postgresql.conf:\n\nssl=on\nssl_ca_file='root+client_ca.crt'\nssl_cert_file='server-cn-only.crt'\nssl_key_file='server-cn-only.key'\nssl_crl_file='root+client.crl'\nssl_min_protocol_version='TLSv1.2'\nssl_max_protocol_version='TLSv1.1'\nssl_min_protocol_version='TLSv1.2'\nssl_max_protocol_version=''\n\nI copied the named files from src/test/ssl/ssl/ into pgdata, and I ran\nchmod 600 on the .key file.\n\nI put this into pgdata/pg_hba.conf at the top:\n\nhostssl all all 127.0.0.1/32 cert clientcert=verify-full\n\nI made a copy of src/test/ssl/ssl/client-revoked.key and ran chmod 600 on it.\n\nNow on unpatched master I get:\n\n$ psql \"host=127.0.0.1 port=5432 dbname=postgres user=tmunro\nsslcert=src/test/ssl/ssl/client-revoked.crt sslkey=client-revoked.key\nsslmode=require\"\npsql: error: could not connect to server: SSL error: sslv3 alert\ncertificate revoked\n\nIt's the same if I add in this sleep in fe-connect.c:\n\n+sleep(1);\n /*\n * Send the startup packet.\n *\n\nIf I revert 1f39a1c0641531e0462a4822f2dba904c5d4d699 \"Restructure\nlibpq's handling of send failures.\", I get the error that eelpout\nshowed intermittently:\n\npsql: error: could not connect to server: server closed the 
connection\nunexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\ncould not send startup packet: Connection reset by peer\n\nI go back to master, and apply your patch. I get the expected error:\n\npsql: error: could not connect to server: SSL error: sslv3 alert\ncertificate revoked\n\n\n",
"msg_date": "Thu, 4 Jun 2020 15:36:52 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: libpq copy error handling busted"
},
{
"msg_contents": "On Thu, Jun 4, 2020 at 3:36 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Here's what I tested.\n\nIn passing, I noticed that this:\n\n$ psql ...\npsql: error: could not connect to server: private key file\n\"src/test/ssl/ssl/client-revoked.key\" has group or world access;\npermissions should be u=rw (0600) or less\n\n... leads to this nonsensical error message on the server:\n\n2020-06-04 16:03:11.547 NZST [7765] LOG: could not accept SSL\nconnection: Success\n\n\n",
"msg_date": "Thu, 4 Jun 2020 16:05:43 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: libpq copy error handling busted"
},
{
"msg_contents": "On Thu, Jun 4, 2020 at 5:37 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Thu, Jun 4, 2020 at 1:53 PM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n> > On Thu, Jun 4, 2020 at 1:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Ah, it's better if I put the pqReadData call into *both* the paths\n> > > where 1f39a1c06 made pqSendSome give up. The attached patch seems\n> > > to fix the issue for the \"pgbench -i\" scenario, with either fast-\n> > > or immediate-mode server stop. I tried it with and without SSL too,\n> > > just to see. Still, it's not clear to me whether this might worsen\n> > > any of the situations we discussed in the lead-up to 1f39a1c06 [1].\n> > > Thomas, are you in a position to redo any of that testing?\n>\n> It seems to be behave correctly in that scenario.\n>\n> Here's what I tested. First, I put this into pgdata/postgresql.conf:\n>\n> ssl=on\n> ssl_ca_file='root+client_ca.crt'\n> ssl_cert_file='server-cn-only.crt'\n> ssl_key_file='server-cn-only.key'\n> ssl_crl_file='root+client.crl'\n> ssl_min_protocol_version='TLSv1.2'\n> ssl_max_protocol_version='TLSv1.1'\n> ssl_min_protocol_version='TLSv1.2'\n> ssl_max_protocol_version=''\n>\n> I copied the named files from src/test/ssl/ssl/ into pgdata, and I ran\n> chmod 600 on the .key file.\n>\n> I put this into pgdata/pg_hba.conf at the top:\n>\n> hostssl all all 127.0.0.1/32 cert clientcert=verify-full\n>\n> I made a copy of src/test/ssl/ssl/client-revoked.key and ran chmod 600 on\n> it.\n>\n\nWould it be feasible to capture this in a sort of a regression (TAP?) test?\n\n--\nAlex\n\nOn Thu, Jun 4, 2020 at 5:37 AM Thomas Munro <thomas.munro@gmail.com> wrote:On Thu, Jun 4, 2020 at 1:53 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Jun 4, 2020 at 1:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Ah, it's better if I put the pqReadData call into *both* the paths\n> > where 1f39a1c06 made pqSendSome give up. 
The attached patch seems\n> > to fix the issue for the \"pgbench -i\" scenario, with either fast-\n> > or immediate-mode server stop. I tried it with and without SSL too,\n> > just to see. Still, it's not clear to me whether this might worsen\n> > any of the situations we discussed in the lead-up to 1f39a1c06 [1].\n> > Thomas, are you in a position to redo any of that testing?\n\nIt seems to be behave correctly in that scenario.\n\nHere's what I tested. First, I put this into pgdata/postgresql.conf:\n\nssl=on\nssl_ca_file='root+client_ca.crt'\nssl_cert_file='server-cn-only.crt'\nssl_key_file='server-cn-only.key'\nssl_crl_file='root+client.crl'\nssl_min_protocol_version='TLSv1.2'\nssl_max_protocol_version='TLSv1.1'\nssl_min_protocol_version='TLSv1.2'\nssl_max_protocol_version=''\n\nI copied the named files from src/test/ssl/ssl/ into pgdata, and I ran\nchmod 600 on the .key file.\n\nI put this into pgdata/pg_hba.conf at the top:\n\nhostssl all all 127.0.0.1/32 cert clientcert=verify-full\n\nI made a copy of src/test/ssl/ssl/client-revoked.key and ran chmod 600 on it.Would it be feasible to capture this in a sort of a regression (TAP?) test?--Alex",
"msg_date": "Thu, 4 Jun 2020 08:22:15 +0200",
"msg_from": "Oleksandr Shulgin <oleksandr.shulgin@zalando.de>",
"msg_from_op": false,
"msg_subject": "Re: libpq copy error handling busted"
},
{
"msg_contents": "On Thu, Jun 4, 2020 at 6:22 PM Oleksandr Shulgin\n<oleksandr.shulgin@zalando.de> wrote:\n> On Thu, Jun 4, 2020 at 5:37 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> Here's what I tested. First, I put this into pgdata/postgresql.conf:\n\n> Would it be feasible to capture this in a sort of a regression (TAP?) test?\n\nIf I'm remembering correctly, it wouldn't work on Windows. After\nyou've had an error sending to a socket, you can't receive (even if\nthere was something sent to you earlier). At least that's how it\nseemed from the experiments on that other thread. The other problem\nis that it requires inserting a sleep into the code...\n\n\n",
"msg_date": "Thu, 4 Jun 2020 21:04:59 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: libpq copy error handling busted"
},
{
"msg_contents": "I wrote:\n> * As for control-C not getting out of it: there is\n> \t\tif (CancelRequested)\n> \t\t\tbreak;\n> in pgbench's loop, but this does nothing in this scenario because\n> fe-utils/cancel.c only sets that flag when it successfully sends a\n> Cancel ... which it certainly cannot if the postmaster is gone.\n> I suspect that this may be relatively recent breakage. It doesn't look\n> too well thought out, in any case. The places that are testing this\n> flag look like they'd rather not be bothered with the fine point of\n> whether the cancel request actually went anywhere.\n\nOn closer inspection, it seems that scripts_parallel.c does have a\ndependency on the cancel request having been sent, because it insists\non collecting a query result off the active connection after detecting\nCancelRequested. This seems dangerously overoptimistic to me; it will\nlock up if for any reason the server doesn't honor the cancel request.\nIt's also pointless, because all the calling apps are just going to close\ntheir connections and exit(1) afterwards, so there's no use in trying to\nresynchronize the connection state. (Plus, disconnectDatabase will\nissue cancels on any busy connections; which would be necessary anyway\nin a parallel operation, since cancel.c could only have signaled one of\nthem.) So the attached patch just removes the useless consumeQueryResult\ncall, and then simplifies select_loop's API a bit.\n\nWith that change, I don't see any place that wants the existing definition\nof CancelRequested rather than the simpler meaning of \"SIGINT was\nreceived\", so I just changed it to mean that. We could certainly also\nhave a variable tracking whether a cancel request was sent, but I see\nno point in one right now.\n\nIt's easiest to test this *without* the other patch -- just run the\npgbench scenario Andres demonstrated, and see whether control-C gets\npgbench to quit cleanly.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 04 Jun 2020 12:29:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: libpq copy error handling busted"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-03 18:41:28 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > When libpq is used to COPY data to the server, it doesn't properly\n> > handle errors.\n> > This is partially an old problem, and partially got recently\n> > worse. Before the below commit we detected terminated connections, but\n> > we didn't handle copy failing.\n> \n> Yeah. After poking at that for a little bit, there are at least three\n> problems:\n> \n> * pqSendSome() is responsible not only for pushing out data, but for\n> calling pqReadData in any situation where it can't get rid of the data\n> promptly. 1f39a1c06 overlooked that requirement, and the upshot is\n> that we don't necessarily notice that the connection is broken (it's\n> pqReadData's job to detect that). Putting a pqReadData call into\n> the early-exit path helps, but doesn't fix things completely.\n\nIs that fully necessary? Couldn't we handle at least the case I had by\nlooking at write_failed in additional places?\n\nIt might still be the right thing to continue to call pqReadData() from\npqSendSome(), don't get me wrong.\n\n\n> * The more longstanding problem is that the PQputCopyData code path\n> doesn't have any mechanism for consuming an 'E' (error) message\n> once pqReadData has collected it. AFAICS that's ancient.\n\nYea, I looked back quite a bit, and it looked that way for a long\ntime. I thought for a moment that it might be related to the copy-both\nintroduction, but it wasn't.\n\n\n> I think that the idea was to let the client dump all its copy data and\n> then report the error message when PQendCopy is called, but as you\n> say, that's none too friendly when there's gigabytes of data involved.\n> I'm not sure we can do anything about this without effectively\n> changing the client API for copy errors, though.\n\nHm. Why would it *really* be an API change? 
Until recently connection\nfailures etc were returned from PQputCopyData(), and it is documented\nthat way:\n\n/*\n * PQputCopyData - send some data to the backend during COPY IN or COPY BOTH\n *\n * Returns 1 if successful, 0 if data could not be sent (only possible\n * in nonblock mode), or -1 if an error occurs.\n */\nint\nPQputCopyData(PGconn *conn, const char *buffer, int nbytes)\n\nSo consuming 'E' when in copy mode doesn't seem like a crazy change to\nme. Probably a bit too big to backpatch though. But given that this\nhasn't been complained about much in however many years...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 5 Jun 2020 21:30:04 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: libpq copy error handling busted"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-06-03 18:41:28 -0400, Tom Lane wrote:\n>> * pqSendSome() is responsible not only for pushing out data, but for\n>> calling pqReadData in any situation where it can't get rid of the data\n>> promptly. 1f39a1c06 overlooked that requirement, and the upshot is\n>> that we don't necessarily notice that the connection is broken (it's\n>> pqReadData's job to detect that). Putting a pqReadData call into\n>> the early-exit path helps, but doesn't fix things completely.\n\n> Is that fully necessary? Couldn't we handle at least the case I had by\n> looking at write_failed in additional places?\n\nNo doubt there's more than one way to do it, but I like fixing this in\npqSendSome; that's adding symmetry not warts. It's already the case\nthat pqSendSome must absorb input when it's transiently unable to send\n(that has to be true to avoid livelock when TCP buffers are full in both\ndirections). So making it absorb input when it's permanently unable\nto send seems consistent with that. Also, fixing this at outer levels\nwould make it likely that we're not covering as many cases; which was\nessentially the point of 1f39a1c06.\n\n>> I think that the idea was to let the client dump all its copy data and\n>> then report the error message when PQendCopy is called, but as you\n>> say, that's none too friendly when there's gigabytes of data involved.\n>> I'm not sure we can do anything about this without effectively\n>> changing the client API for copy errors, though.\n\n> Hm. Why would it *really* be an API change?\n\nIt'd still conform to the letter of the documentation, sure, but it'd\nnonetheless be a user-visible behavioral change.\n\nIt strikes me that we could instead have the COPY code path \"peek\"\nto see if an 'E' message is waiting in the inBuffer, without actually\nconsuming it, and start failing PQputCopyData calls if so. 
That would\nbe less of a behavioral change than consuming the message, in the sense\nthat the error would still be available to be reported when PQendcopy is\ncalled.\n\nOn the other hand, that approach assumes that the application will\nindeed call PQendcopy to see what's up, rather than just printing some\nunhelpful \"copy failed\" message and going belly up. pgbench is, um, a\ncounterexample. If we suppose that pgbench is representative of the\nstate of the art in applications, then we'd be better off consuming the\nerror message and reporting it via the notice mechanism. Which would\neffectively mean that the user gets to see it and the application\ndoesn't. On the whole I don't like that, but if we do it the first way\nthen there might be a lot of apps that need upgrades to handle COPY\nfailures nicely. (And on the third hand, those apps *already* need\nupgrades to handle COPY failures nicely, plus right now you have to\nwait till the end of the COPY. So anything would be an improvement.)\n\n> But given that this\n> hasn't been complained about much in however many years...\n\nYeah, it's kind of hard to summon the will to break things when\nthere aren't user complaints. You can bet that somebody will\ncomplain if we change this, in either direction.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Jun 2020 12:16:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: libpq copy error handling busted"
}
] |
[
{
"msg_contents": "Hello!\n\nI'd like to propose a simple patch to allow for negative ISO 8601\nintervals with leading minus, e.g. -PT1H besides PT-1H. It seems that\nstandard isn't quite clear on negative duration. However, lots of\nsoftware use leading minus and expect/generate intervals in such forms\nmaking those incompatible with current PostgreSQL decoding code.\n\nAll patch is doing is making a note of a leading minus and negates pg_tm\ncomponents along with fractional seconds. No other behavior change is\nintroduced.\n\n--\nMikhail",
"msg_date": "Wed, 03 Jun 2020 16:31:44 -0500",
"msg_from": "Mikhail Titov <mlt@gmx.us>",
"msg_from_op": true,
"msg_subject": "[PATCH] Leading minus for negative time interval in ISO 8601"
},
{
"msg_contents": "Mikhail Titov <mlt@gmx.us> writes:\n> I'd like to propose a simple patch to allow for negative ISO 8601\n> intervals with leading minus, e.g. -PT1H besides PT-1H. It seems that\n> standard isn't quite clear on negative duration.\n\n\"Isn't quite clear\"? ISTM that if the standard intended to allow that,\nit'd be pretty clear. I looked through the 8601 spec just now, and\nI can't see any indication whatever that they intend to allow \"-\" before P.\nIt's hard to see why they'd bother with that introducer at all if\ndata can appear before it.\n\n> However, lots of\n> software use leading minus and expect/generate intervals in such forms\n> making those incompatible with current PostgreSQL decoding code.\n\nWhich \"lots of software\" are you speaking of, exactly? interval_in\nhas never had such a capability, and I don't recall previous complaints\nabout it.\n\nThe difference between a useful standard and a useless one is the\nextent to which people obey the standard rather than adding random\nextensions to it, so I'm not inclined to add such an extension\nwithout a very well-grounded argument for it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 03 Jun 2020 22:46:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Leading minus for negative time interval in ISO 8601"
},
{
"msg_contents": "On 06/03/20 22:46, Tom Lane wrote:\n> \"Isn't quite clear\"? ISTM that if the standard intended to allow that,\n> it'd be pretty clear. I looked through the 8601 spec just now, and\n> I can't see any indication whatever that they intend to allow \"-\" before P.\n\nUmm, did you see any indication that they intend to allow \"-\" /anywhere/\nin a time interval (with the exception of between year and month, month\nand day in the alternate form, as simple delimiters, not as minus?\n\n(Maybe you did; I'm looking at a publicly-accessible 2016 draft.)\n\nIt looks like the whole idea of minusness has to be shoehorned into ISO 8601\nby anyone who misses it, and that's been done different ways. I guess that's\nthe \"isn't quite clear\" part.\n\n> Which \"lots of software\" are you speaking of, exactly? interval_in\n> has never had such a capability, and I don't recall previous complaints\n> about it.\n\nJava durations allow both the PostgreSQL-style minus on individual\ncomponents, and a leading minus that negates the whole thing. [1]\nThat explicitly says \"The leading plus/minus sign, and negative values\nfor other units are not part of the ISO-8601 standard.\"\n\nXML Schema (and therefore XML Query, which uses XML Schema data types)\nallows only the leading minus. [2]\n\nThe XML Schema folks say their concept is \"drawn from those of ISO 8601,\nspecifically durations without fixed endpoints.\" That's why they can get\naway with just the single leading sign: they don't admit something like\nP1M-1D which you don't know to call 27, 28, 29, or 30 days until you're\ngiven an endpoint to hang it on.\n\nI had to deal with that in [3].\n\nRegards,\n-Chap\n\n\n\n\n[1]\nhttps://docs.oracle.com/javase/8/docs/api/java/time/Duration.html#parse-java.lang.CharSequence-\n\n[2] https://www.w3.org/TR/xmlschema11-2/#nt-durationRep\n\n[3]\nhttps://github.com/tada/pljava/blob/master/pljava-examples/src/main/java/org/postgresql/pljava/example/saxon/S9.java#L329\n\n\n",
"msg_date": "Wed, 3 Jun 2020 23:59:39 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Leading minus for negative time interval in ISO 8601"
},
{
"msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> On 06/03/20 22:46, Tom Lane wrote:\n>> \"Isn't quite clear\"? ISTM that if the standard intended to allow that,\n>> it'd be pretty clear. I looked through the 8601 spec just now, and\n>> I can't see any indication whatever that they intend to allow \"-\" before P.\n\n> Umm, did you see any indication that they intend to allow \"-\" /anywhere/\n> in a time interval (with the exception of between year and month, month\n> and day in the alternate form, as simple delimiters, not as minus?\n> (Maybe you did; I'm looking at a publicly-accessible 2016 draft.)\n\nI don't have an \"official\" copy either; I was looking at this draft:\nhttps://www.loc.gov/standards/datetime/iso-tc154-wg5_n0038_iso_wd_8601-1_2016-02-16.pdf\n\nI see this bit:\n\n [±] represents a plus sign [+] if in combination with the following\n element a positive value or zero needs to be represented (in this\n case, unless explicitly stated otherwise, the plus sign shall not be\n omitted), or a minus sign [−] if in combination with the following\n element a negative value needs to be represented.\n\nbut I agree that there's no clear application of that to intervals,\neither overall or per-field.\n\n> Java durations allow both the PostgreSQL-style minus on individual\n> components, and a leading minus that negates the whole thing. [1]\n> That explicitly says \"The leading plus/minus sign, and negative values\n> for other units are not part of the ISO-8601 standard.\"\n> XML Schema (and therefore XML Query, which uses XML Schema data types)\n> allows only the leading minus. [2]\n\nHm. The slippery slope I *don't* want to be drawn down is somebody\narguing that we should change interval_out, because that would open\na whole Pandora's box of compatibility issues. 
Maybe we should just\ntake the position that negative intervals aren't standardized, and\nif you want to transport them using ISO format then you first need\nto lobby ISO to fix that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Jun 2020 00:25:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Leading minus for negative time interval in ISO 8601"
},
{
"msg_contents": "On Wed, Jun 3, 2020 at 9:46 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> ...\n> ISTM that if the standard intended to allow that, it'd be pretty\n> clear. I looked through the 8601 spec just now, and I can't see any\n> indication whatever that they intend to allow \"-\" before P.\n\nTo be fair, I do not have an access to 2019 edition that\nseems to address negative duration, but what I can see from the wording\nat\nhttps://www.loc.gov/standards/datetime/iso-tc154-wg5_n0039_iso_wd_8601-2_2016-02-16.pdf\n, it seems to be written without an idea of negative duration at all,\neven PT-1D alikes supported by PostgreSQL. Also that PDF mentions comma\nas a preferred sign for e.g. PT1,5D that PostgreSQL does not accept. I\nunderstand though that PDF explicitly states it is not a standard.\n\n> It's hard to see why they'd bother with that introducer at all if data\n> can appear before it.\n\nI'm not sure I follow. Do you mean to hard require for time/span to\nstart with P and nothing but that? If so, can we think of it as a\nsyntactic sugar? I.e. unary minus AND a normal, positive duration of\nyour liking that we just negate in-place.\n\n>> However, lots of software use leading minus and expect/generate\n>> intervals in such forms making those incompatible with current\n>> PostgreSQL decoding code.\n>\n> Which \"lots of software\" are you speaking of, exactly? interval_in\n> has never had such a capability, and I don't recall previous complaints\n> about it.\n\nI was not talking about PG-centric software in particular. I had some\nJavaScript libraries, Ruby on Rails, Java, Rust, Go in mind. 
Here is the\nrelated issue for Rust https://github.com/rust-lang/rust/issues/18181\nand some Go library\nhttps://pkg.go.dev/github.com/rickb777/date/period?tab=doc#Parse (besides\nthe links I gave in the patch) to show examples of accepting a minus prefix.\n\nI presume no one complained much previously because the offset can be (and\noften is) stored as a float in, e.g., seconds, and then offset * '@1\nsecond'::interval. That looks a bit verbose and I'd prefer to keep the\noffset as an interval and do no extra casting.\n\nTake a look at the w3c specs that refer to ISO 8601 as well. I understand\nthat is not what PG is after, but here is an excerpt:\n\n,----[ https://www.w3.org/TR/xmlschema-2/#duration ]\n| One could also indicate a duration of minus 120 days as: -P120D.\n| ...\n| P-1347M is not allowed although -P1347M is allowed\n`----\n\nNote that the second example explicitly contradicts currently allowed PG\nsyntax. I presume if the standard were clear, there would be no such\nambiguity.\n\nNot that I'm trying to introduce drastic changes, but to make PostgreSQL\nsomewhat more friendly in what it can accept directly without\ndancing around.\n\n--\nMikhail\n\n\n",
"msg_date": "Wed, 03 Jun 2020 23:27:48 -0500",
"msg_from": "Mikhail Titov <mlt@gmx.us>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Leading minus for negative time interval in ISO 8601"
},
{
"msg_contents": "> ...\n>> Umm, did you see any indication that they intend to allow \"-\" /anywhere/\n>> in a time interval (with the exception of between year and month, month\n>> and day in the alternate form, as simple delimiters, not as minus?\n>> (Maybe you did; I'm looking at a publicly-accessible 2016 draft.)\n>\n> I don't have an \"official\" copy either; I was looking at this draft:\n> https://www.loc.gov/standards/datetime/iso-tc154-wg5_n0038_iso_wd_8601-1_2016-02-16.pdf\n\nheh, no one has an up to date standard :-) Also that is the link I meant\nto include in my first reply. From what I see at\nhttps://www.iso.org/obp/ui/#iso:std:iso:8601:-2:ed-1:v1:en they (ISO) did\naddress negative values for components and also there is \"3.1.1.7\nnegative duration\" that would be nice to read somehow.\n\n> I see this bit:\n>\n> [±] represents a plus sign [+] if in combination with the following\n> element a positive value or zero needs to be represented (in this\n> case, unless explicitly stated otherwise, the plus sign shall not be\n> omitted), or a minus sign [−] if in combination with the following\n> element a negative value needs to be represented.\n\nBut nowhere near duration specification [±] is used whatsoever.\n\n> Hm. The slippery slope I *don't* want to be drawn down is somebody\n> arguing that we should change interval_out, because that would open\n> a whole Pandora's box of compatibility issues. Maybe we should just\n> take the position that negative intervals aren't standardized, and\n> if you want to transport them using ISO format then you first need\n> to lobby ISO to fix that.\n\nI explicitly do NOT want to change anything on the way out. First, that\nis how things are and we do not want to break anything. And, second, in\nmany cases client software can read either format. That is why I thought\nit would be a trivial change. No output changes.\n\n-- \nMikhail\n\n\n",
"msg_date": "Wed, 03 Jun 2020 23:48:55 -0500",
"msg_from": "Mikhail Titov <mlt@gmx.us>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Leading minus for negative time interval in ISO 8601"
},
{
"msg_contents": "On Wed, Jun 3, 2020 at 11:25 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> ...\n> Maybe we should just take the position that negative intervals aren't\n> standardized, and if you want to transport them using ISO format then\n> you first need to lobby ISO to fix that.\n\nApparently ISO did \"fix\" this. I managed to get a copy of ISO\n8601-2:2019(E) and I insist on reconsidering the patch. Here is an\nexcerpt from page 12 of the standard:\n\n,----[ 4.4.1.9 Duration ]\n| A duration in the reverse direction, called a \"negative duration\" in\n| this document, can be expressed using the following representation based\n| on the [duration] representation specified in ISO 8601-1:2019, 5.5.2. In\n| this case, all time scale components within the duration representation\n| shall be positive.\n|\n| duration(m) = [!][\"-\"][positiveDuration]\n|\n| where [positiveDuration] is the representation of a positive duration.\n|\n| EXAMPLE 1 '-P1000' in date represents the duration of 100 days in the\n| reverse direction. The duration formula 'P3650 - PlOOO' results in\n| 'P2650'.\n|\n| EXAMPLE 2 '-P1Y30' in date represents the duration of one year and three\n| days in the reverse direction. The duration formula 'PSY60 - P1Y30'\n| results in 'P4Y30'.\n`----\n\nNote (mine) exclamation sign [!] means the following is optional. Here\nis the definition for positiveDuration:\n\n,----[ positiveDuration ]\n| representation of [duration] specified in ISO 8601-1:2019, 5.4.2 that\n| contains only time scale components that have positive values\n`----\n\nHowever on page 41 the standard says:\n\n,----[ 11.2 Durational units ]\n| Individual duration units are allowed to have negative values. 
The\n| following representation denoted as [durationUnits(m)] accepts negative values per component.\n|\n| durationUnits(m) = [yearE(m)][monthE(m)][weekE(m)][dayE(m)][\"T\"][hourE(m)][minuteE(m)]\n| [secondE(m)]\n`----\n\nAnd, finally, there is that\n\n,----[ 11.3.2 Composite representation ]\n| The composite representation of a duration is a more flexible and\n| relaxed specification for duration than that of ISO 8601-1:2019,\n| 5.5.2. It accepts all expressions of the duration representation given\n| in ISO 8601-1:2019, 5.5.2 and is given as follows.\n|\n| [!][\"-\"][\"P\"][durationUnits(m)]\n|\n| where [durationUnits(m)] contains time scale components for expressing\n| (positive or negative) duration (see 11.2).\n|\n| Expressions in the two examples below are valid in ISO 8601-1.\n|\n| EXAMPLE 1 'P3D', duration of three days.\n| EXAMPLE 2 'P180Y800D', duration of one-hundred-and-eighty years and eight-hundred days.\n|\n| Expressions in the following four examples below are not valid in ISO\n| 8601-1, but are valid as specified in this clause.\n|\n| EXAMPLE 3 'P3W2D', duration of three weeks and two days, which is 23 days (equivalent to the expression\n| 'P23D'). 
In ISO 8601-1, [\"W\"] is not permitted to occur along with any other component.\n| EXAMPLE 4 'PSYlOW', duration of five years and ten weeks.\n| EXAMPLE 5 'P-3M-3DT1HSM', duration of three months and three days in the reverse direction, with one hour\n| and five minutes in the original direction.\n| EXAMPLE 6 'P-ZM-1D', duration in the reverse direction of two months and one day.\n|\n| When a minus sign is provided as prefix to the duration designator\n| [\"P\"], the minus sign can be internalized into individual time scale\n| components within the duration expression by applying to every time\n| scale component within.\n|\n| EXAMPLE 7 '-P2M1D' is equivalent to 'P-2M-1D'.\n| EXAMPLE 8 '-P5DT10H' is equivalent to 'P-5DT-10H'.\n|\n| When a minus sign is applied to a time scale component whose value is\n| already negative (pointing to the reverse direction), it means that\n| the direction of duration should be once again reversed and should be\n| turned into a positive value.\n|\n| EXAMPLE 9 '-P8M-1D', duration in reverse, \"eight months minus one day\", is equivalent to 'P-8M1D', \"eight\n| months ago with a day ahead\".\n| EXAMPLE 10 '-P-5WT-18H30M', duration in reverse, \"go back five weeks, eighteen hours but thirty minutes\n| ahead\", is equivalent to 'P5WT18H-30M', \"go ahead five weeks, eighteen hours, but thirty minutes back\".\n|\n| NOTE The exact duration for some time scale components can be known only when placed on the actual\n| time scale, see D.2.\n`----\n\nOn a side note, it also defines (4.4.2) exponential values, but I guess\nwe can pass on those for now.\n\n--\nMikhail\n\n\n",
"msg_date": "Tue, 09 Jun 2020 23:18:20 -0500",
"msg_from": "Mikhail Titov <mlt@gmx.us>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Leading minus for negative time interval in ISO 8601"
},
{
"msg_contents": "On Tue, Jun 9, 2020 at 11:18:20PM -0500, Mikhail Titov wrote:\n> On Wed, Jun 3, 2020 at 11:25 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > ...\n> > Maybe we should just take the position that negative intervals aren't\n> > standardized, and if you want to transport them using ISO format then\n> > you first need to lobby ISO to fix that.\n> \n> Apparently ISO did \"fix\" this. I managed to get a copy of ISO\n> 8601-2:2019(E) and I insist on reconsidering the patch. Here is an\n> excerpt from page 12 of the standard:\n\nThis shows the problem of trying to honor a standard which is not\npublicly available.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Thu, 11 Jun 2020 18:51:33 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Leading minus for negative time interval in ISO 8601"
}
] |
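The ISO 8601-2:2019 rule quoted in this thread (EXAMPLEs 7-10: a leading minus before "P" is internalized by flipping the sign of every time-scale component) can also be applied client-side while interval_in lacks support for the leading-minus form. Below is a minimal sketch of such a normalizer; it is a hypothetical helper for illustration, not part of the proposed patch, and it assumes the input is otherwise a well-formed ISO 8601 duration string.

```python
import re

def normalize_iso_duration(s: str) -> str:
    """Internalize a leading minus into each time-scale component,
    per ISO 8601-2:2019, 11.3.2 (e.g. '-P2M1D' -> 'P-2M-1D')."""
    if not s.startswith("-"):
        return s  # already in a form PostgreSQL's interval_in accepts

    def flip(m: "re.Match") -> str:
        tok = m.group(0)
        # Flipping an already-negative component turns it positive again
        return tok[1:] if tok.startswith("-") else "-" + tok

    # Negate every numeric component (decimal point or comma allowed);
    # the 'P'/'T' designators and unit letters are left untouched.
    return re.sub(r"-?\d+(?:[.,]\d+)?", flip, s[1:])
```

A driver or application could run this before binding the value to an interval parameter; for instance `normalize_iso_duration('-P8M-1D')` yields `'P-8M1D'`, matching EXAMPLE 9 of the standard.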
[
{
"msg_contents": "Greetings,\n\nOur system uses an EAV like database and generates queries like the example below.\n\nAs you could see the query includes castings, we noticed testing with Postgres 12 that the castings of the CASE THEN statement (commented out below) where failing in some cases, of course if you do the INNER JOIN and CASE WHEN first our expectation is that the value can be casted.\n\nChanging INNER JOIN to LEFT JOIN solved the issue in Postgres 12, testing with earlier versions of Postgres INNER JOIN worked perfectly. \n\nHas somebody already reported anything like this? Maybe an issue with some optimisation?\n\nBest,\nJuan\n\nExample query:\n\nSELECT DISTINCT t0.id\nFROM samples t0\nINNER JOIN sample_properties t1 ON t0.id = t1.samp_id\nINNER JOIN sample_type_property_types t2 ON t1.stpt_id = t2.id\nINNER JOIN property_types t3 ON t2.prty_id = t3.id\nINNER JOIN data_types t4 ON t3.daty_id = t4.id\nLEFT JOIN controlled_vocabulary_terms t5 ON t1.cvte_id = t5.id\nLEFT JOIN materials t6 ON t1.mate_prop_id = t6.id\nINNER JOIN sample_properties t7 ON t0.id = t7.samp_id\nINNER JOIN sample_type_property_types t8 ON t7.stpt_id = t8.id\nINNER JOIN property_types t9 ON t8.prty_id = t9.id\nINNER JOIN data_types t10 ON t9.daty_id = t10.id\nLEFT JOIN controlled_vocabulary_terms t11 ON t7.cvte_id = t11.id\nLEFT JOIN materials t12 ON t7.mate_prop_id = t12.id\nINNER JOIN sample_properties t13 ON t0.id = t13.samp_id\nINNER JOIN sample_type_property_types t14 ON t13.stpt_id = t14.id\nINNER JOIN property_types t15 ON t14.prty_id = t15.id\nINNER JOIN data_types t16 ON t15.daty_id = t16.id\nLEFT JOIN controlled_vocabulary_terms t17 ON t13.cvte_id = t17.id\nLEFT JOIN materials t18 ON t13.mate_prop_id = t18.id\nINNER JOIN sample_properties t19 ON t0.id = t19.samp_id\nINNER JOIN sample_type_property_types t20 ON t19.stpt_id = t20.id\nINNER JOIN property_types t21 ON t20.prty_id = t21.id\nINNER JOIN data_types t22 ON t21.daty_id = t22.id\nLEFT JOIN 
controlled_vocabulary_terms t23 ON t19.cvte_id = t23.id\nLEFT JOIN materials t24 ON t19.mate_prop_id = t24.id\nINNER JOIN sample_properties t25 ON t0.id = t25.samp_id\nINNER JOIN sample_type_property_types t26 ON t25.stpt_id = t26.id\nINNER JOIN property_types t27 ON t26.prty_id = t27.id\nINNER JOIN data_types t28 ON t27.daty_id = t28.id\nLEFT JOIN controlled_vocabulary_terms t29 ON t25.cvte_id = t29.id\nLEFT JOIN materials t30 ON t25.mate_prop_id = t30.id\nWHERE t0.saty_id IN (SELECT unnest(ARRAY[5])) AND t3.is_internal_namespace = true \n AND t3.code = 'STORAGE_POSITION.STORAGE_CODE' \n AND (lower(t1.value) = 'default_storage' OR lower(t5.code) = 'default_storage' OR lower(t6.code) = 'default_storage')\n AND t7.stpt_id = (SELECT id FROM sample_type_property_types WHERE saty_id = 5 AND prty_id = (SELECT id FROM property_types WHERE is_internal_namespace = true AND code = 'STORAGE_POSITION.STORAGE_RACK_ROW'))\n AND t7.value::numeric = 1\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t \n-- AND \n-- CASE WHEN t9.is_internal_namespace = true \n-- AND t9.code = 'STORAGE_POSITION.STORAGE_RACK_ROW' \n-- AND (t10.code = 'INTEGER' OR t10.code = 'REAL') \n-- THEN t7.value::numeric = 1 \n-- ELSE false \n-- END \n AND t13.stpt_id = (SELECT id FROM sample_type_property_types WHERE saty_id = 5 AND prty_id = (SELECT id FROM property_types WHERE is_internal_namespace = true AND code = 'STORAGE_POSITION.STORAGE_RACK_COLUMN'))\n AND t13.value::numeric = 2 \n-- AND \n-- CASE WHEN t15.is_internal_namespace = true \n-- AND t15.code = 'STORAGE_POSITION.STORAGE_RACK_COLUMN' \n-- AND (t16.code = 'INTEGER' OR t16.code = 'REAL') \n-- THEN t13.value::numeric = 2 \n-- ELSE false \n-- END \n AND t21.is_internal_namespace = true \n AND t21.code = 'STORAGE_POSITION.STORAGE_BOX_NAME' \n AND (lower(t19.value) = 'box2' OR lower(t23.code) = 'box2' OR lower(t24.code) = 'box2') \n AND t27.is_internal_namespace = true \n AND t27.code = 'STORAGE_POSITION.STORAGE_BOX_POSITION' \n AND (t25.value ILIKE '%a3%' 
OR t29.code ILIKE '%a3%' OR t30.code ILIKE '%a3%');\n\n",
"msg_date": "Thu, 4 Jun 2020 09:46:30 +0200",
"msg_from": "Juan Fuentes <juanmarianofuentes@gmail.com>",
"msg_from_op": true,
"msg_subject": "Possible bug on Postgres 12 (CASE THEN evaluated prematurely) -\n Change of behaviour compared to 11, 10, 9"
},
{
"msg_contents": "Juan Fuentes <juanmarianofuentes@gmail.com> writes:\n> As you could see the query includes castings, we noticed testing with Postgres 12 that the castings of the CASE THEN statement (commented out below) where failing in some cases, of course if you do the INNER JOIN and CASE WHEN first our expectation is that the value can be casted.\n\nYou're unlikely to get any useful comments on this if you don't provide\na self-contained example. The query by itself lacks too many details.\nAs an example, one way \"t7.value::numeric = 1\" could fail despite being\ninside a CASE is if t7 is a view whose \"value\" column is actually a\nconstant. Flattening of the view would replace \"t7.value\" with that\nconstant, and then constant-folding would cause the failure, and neither\nof those things are prevented by a CASE. I kind of doubt that that's\nthe specific issue here, but I'm not going to guess at what is in your\nthirty-some input tables.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Jun 2020 10:26:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Possible bug on Postgres 12 (CASE THEN evaluated prematurely) -\n Change of behaviour compared to 11, 10, 9"
},
{
"msg_contents": "Thanks Tom!\n\nI was just hopping somebody could point out if this kind of issue has been reported before spending 2 days fabricating a simpler self contained example.\n\nBest,\nJuan\n\n> On 4 Jun 2020, at 16:26, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Juan Fuentes <juanmarianofuentes@gmail.com> writes:\n>> As you could see the query includes castings, we noticed testing with Postgres 12 that the castings of the CASE THEN statement (commented out below) where failing in some cases, of course if you do the INNER JOIN and CASE WHEN first our expectation is that the value can be casted.\n> \n> You're unlikely to get any useful comments on this if you don't provide\n> a self-contained example. The query by itself lacks too many details.\n> As an example, one way \"t7.value::numeric = 1\" could fail despite being\n> inside a CASE is if t7 is a view whose \"value\" column is actually a\n> constant. Flattening of the view would replace \"t7.value\" with that\n> constant, and then constant-folding would cause the failure, and neither\n> of those things are prevented by a CASE. I kind of doubt that that's\n> the specific issue here, but I'm not going to guess at what is in your\n> thirty-some input tables.\n> \n> \t\t\tregards, tom lane\n\n\n\n",
"msg_date": "Thu, 4 Jun 2020 17:20:43 +0200",
"msg_from": "Juan Fuentes <juanmarianofuentes@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Possible bug on Postgres 12 (CASE THEN evaluated prematurely) -\n Change of behaviour compared to 11, 10, 9"
}
] |
[
{
"msg_contents": "Hi ,\n\nWhat is the right approach for using AT TIME ZONE function?\n\nOption 1: <some_date with tz> AT TIME ZONE 'IST'\nOption 2: <some_date with tz> AT TIME ZONE 'Asia/Kolkata'\n\nIn the first option, I get +2:00:00 offset (when *timezone_abbrevations =\n'Default'*) and for option 2 , +5:30 offset.\n\nI can see multiple entries for IST in pg_timezone_names with\ndifferent utc_offset, but in pg_timezone_abbrev there is one entry. I guess\nAT TIME ZONE function using the offset shown in pg_timezone_abbrev.\n\novdb=> select * from pg_timezone_names where abbrev = 'IST';\nname | abbrev | utc_offset | is_dst\n---------------------+--------+------------+--------\n Asia/Calcutta | IST | 05:30:00 | f\n Asia/Kolkata | IST | 05:30:00 | f\n Europe/Dublin | IST | 01:00:00 | t\n posix/Asia/Calcutta | IST | 05:30:00 | f\n posix/Asia/Kolkata | IST | 05:30:00 | f\n posix/Europe/Dublin | IST | 01:00:00 | t\n posix/Eire | IST | 01:00:00 | t\n Eire | IST | 01:00:00 | t\n\novdb=> select * from pg_timezone_abbrevs where abbrev = 'IST';\n abbrev | utc_offset | is_dst\n--------+------------+--------\n IST | 02:00:00 | f\n\nIn my system, we receive TZ in abbrev format (3 character, like EST, PST\n...).\n\nI have tried changing the timezone_abbrevations = 'India', then it worked\nfine (IST is giving +5:30 offset)\n\nSo,\n What is recommended, use name instead of abbrev in TZ conversion\nfunction?\n Or\n Change the timezone_abbrevations to 'India'?\n\n*Regards,*\n*Rajin *\n\nHi ,What is the right approach for using AT TIME ZONE function?Option 1: <some_date with tz> AT TIME ZONE 'IST' Option 2: <some_date with tz> AT TIME ZONE 'Asia/Kolkata' In the first option, I get +2:00:00 offset (when timezone_abbrevations = 'Default') and for option 2 , +5:30 offset. I can see multiple entries for IST in pg_timezone_names with different utc_offset, but in pg_timezone_abbrev there is one entry. I guess AT TIME ZONE function using the offset shown in pg_timezone_abbrev. 
ovdb=> select * from pg_timezone_names where abbrev = 'IST';name | abbrev | utc_offset | is_dst---------------------+--------+------------+-------- Asia/Calcutta | IST | 05:30:00 | f Asia/Kolkata | IST | 05:30:00 | f Europe/Dublin | IST | 01:00:00 | t posix/Asia/Calcutta | IST | 05:30:00 | f posix/Asia/Kolkata | IST | 05:30:00 | f posix/Europe/Dublin | IST | 01:00:00 | t posix/Eire | IST | 01:00:00 | t Eire | IST | 01:00:00 | tovdb=> select * from pg_timezone_abbrevs where abbrev = 'IST'; abbrev | utc_offset | is_dst--------+------------+-------- IST | 02:00:00 | fIn my system, we receive TZ in abbrev format (3 character, like EST, PST ...). I have tried changing the timezone_abbrevations = 'India', then it worked fine (IST is giving +5:30 offset)So, What is recommended, use name instead of abbrev in TZ conversion function? Or Change the \n\ntimezone_abbrevations to 'India'? Regards,Rajin",
"msg_date": "Thu, 4 Jun 2020 15:15:01 +0530",
"msg_from": "Rajin Raj <rajin.raj@opsveda.com>",
"msg_from_op": true,
"msg_subject": "Regarding TZ conversion"
},
{
"msg_contents": "Rajin Raj <rajin.raj@opsveda.com> writes:\n> Option 1: <some_date with tz> AT TIME ZONE 'IST'\n> Option 2: <some_date with tz> AT TIME ZONE 'Asia/Kolkata'\n> In the first option, I get +2:00:00 offset (when *timezone_abbrevations =\n> 'Default'*) and for option 2 , +5:30 offset.\n\n> I can see multiple entries for IST in pg_timezone_names with\n> different utc_offset, but in pg_timezone_abbrev there is one entry. I guess\n> AT TIME ZONE function using the offset shown in pg_timezone_abbrev.\n\nNo. If you use an abbreviation rather than a spelled-out zone name,\nyou get whatever the timezone_abbrevations file says, which by default\nis\n\n$ grep IST .../postgresql/share/timezonesets/Default \n# CONFLICT! IST is not unique\n# - IST: Irish Standard Time (Europe)\n# - IST: Indian Standard Time (Asia)\nIST 7200 # Israel Standard Time\n\nIf that's not what you want, change it. See\n\nhttps://www.postgresql.org/docs/current/datetime-config-files.html\n\nand also\n\nhttps://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-TIMEZONES\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Jun 2020 09:53:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Regarding TZ conversion"
},
{
"msg_contents": "Thanks for the clarification.\n\nIs it advisable to modify the Default? Will it override when we apply a\npatch or upgrade the DB?\n\nWhat about creating a new file like below and update the postgres.conf with\nthe new name.\n\n# New tz offset\n @INCLUDE Default\n\n @OVERRDIE\n IST 19800\n ........................\n\n\n*Regards,*\n*Rajin *\n\n\nOn Thu, Jun 4, 2020 at 7:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Rajin Raj <rajin.raj@opsveda.com> writes:\n> > Option 1: <some_date with tz> AT TIME ZONE 'IST'\n> > Option 2: <some_date with tz> AT TIME ZONE 'Asia/Kolkata'\n> > In the first option, I get +2:00:00 offset (when *timezone_abbrevations =\n> > 'Default'*) and for option 2 , +5:30 offset.\n>\n> > I can see multiple entries for IST in pg_timezone_names with\n> > different utc_offset, but in pg_timezone_abbrev there is one entry. I\n> guess\n> > AT TIME ZONE function using the offset shown in pg_timezone_abbrev.\n>\n> No. If you use an abbreviation rather than a spelled-out zone name,\n> you get whatever the timezone_abbrevations file says, which by default\n> is\n>\n> $ grep IST .../postgresql/share/timezonesets/Default\n> # CONFLICT! IST is not unique\n> # - IST: Irish Standard Time (Europe)\n> # - IST: Indian Standard Time (Asia)\n> IST 7200 # Israel Standard Time\n>\n> If that's not what you want, change it. See\n>\n> https://www.postgresql.org/docs/current/datetime-config-files.html\n>\n> and also\n>\n>\n> https://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-TIMEZONES\n>\n> regards, tom lane\n>\n\nThanks for the clarification. Is it advisable to modify the Default? Will it override when we apply a patch or upgrade the DB? 
What about creating a new file like below and update the postgres.conf with the new name.# New tz offset @INCLUDE Default @OVERRDIE IST 19800 ........................Regards,Rajin On Thu, Jun 4, 2020 at 7:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Rajin Raj <rajin.raj@opsveda.com> writes:\n> Option 1: <some_date with tz> AT TIME ZONE 'IST'\n> Option 2: <some_date with tz> AT TIME ZONE 'Asia/Kolkata'\n> In the first option, I get +2:00:00 offset (when *timezone_abbrevations =\n> 'Default'*) and for option 2 , +5:30 offset.\n\n> I can see multiple entries for IST in pg_timezone_names with\n> different utc_offset, but in pg_timezone_abbrev there is one entry. I guess\n> AT TIME ZONE function using the offset shown in pg_timezone_abbrev.\n\nNo. If you use an abbreviation rather than a spelled-out zone name,\nyou get whatever the timezone_abbrevations file says, which by default\nis\n\n$ grep IST .../postgresql/share/timezonesets/Default \n# CONFLICT! IST is not unique\n# - IST: Irish Standard Time (Europe)\n# - IST: Indian Standard Time (Asia)\nIST 7200 # Israel Standard Time\n\nIf that's not what you want, change it. See\n\nhttps://www.postgresql.org/docs/current/datetime-config-files.html\n\nand also\n\nhttps://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-TIMEZONES\n\n regards, tom lane",
"msg_date": "Thu, 4 Jun 2020 23:57:53 +0530",
"msg_from": "Rajin Raj <rajin.raj@opsveda.com>",
"msg_from_op": true,
"msg_subject": "Re: Regarding TZ conversion"
}
] |
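The CONFLICT comment in Tom's grep output above is not a PostgreSQL quirk: "IST" is genuinely ambiguous in the tz database, which is why spelled-out zone names are the safe choice. A small illustration of the same collision outside PostgreSQL (Python is used here only for demonstration; assumes Python 3.9+ with the zoneinfo module and an available tz database):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # stdlib since Python 3.9; needs tz data installed

# The same naive wall-clock time attached to two zones that both
# report the abbreviation "IST" -- with different UTC offsets.
naive = datetime(2020, 6, 4, 12, 0)
kolkata = naive.replace(tzinfo=ZoneInfo("Asia/Kolkata"))  # Indian Standard Time
dublin = naive.replace(tzinfo=ZoneInfo("Europe/Dublin"))  # Irish Standard Time

print(kolkata.tzname(), kolkata.utcoffset())  # IST 5:30:00
print(dublin.tzname(), dublin.utcoffset())    # IST 1:00:00
```

Since the abbreviation alone cannot distinguish the two, any consumer (PostgreSQL's timezone_abbreviations file included) has to pick one fixed offset per abbreviation, which is exactly the behavior the thread describes.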
[
{
"msg_contents": "Hello!On postgres 12.3 the problem still exists (https://www.postgresql.org/message-id/16446-2011a4b103fc5fd1%40postgresql.org): (Tested on PostgreSQL 12.3 (Debian 12.3-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit) Bug reference: 16446Logged by: Георгий ДракEmail address: sonicgd(at)gmail(dot)comPostgreSQL version: 12.2Operating system: Debian 10.3Description: Hello. I'm catch error \"virtual tuple table slot does not have systemattributes\" when inserting row into partitioned table with RETURNING xmin; Reproduction. 1. Create schemaCREATE TABLE \"tmp\"( id bigint generated always as identity, date timestamptz not null, foo int not null, PRIMARY KEY (\"id\", \"date\")) PARTITION BY RANGE (\"date\");CREATE TABLE \"tmp_2020\" PARTITION OF \"tmp\" FOR VALUES FROM ('2020-01-01') TO('2021-01-01'); 2. Execute queryINSERT INTO \"tmp\" (\"date\", \"foo\")VALUES (NOW(), 1)RETURNING id, xmin; 3. Result - ERROR: virtual tuple table slot does not have systemattributes 4. Expected result - id and xmin of inserted row. ",
"msg_date": "Thu, 04 Jun 2020 13:50:07 +0300",
"msg_from": "Pavel Biryukov <79166341370@yandex.ru>",
"msg_from_op": true,
"msg_subject": "posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Pavel Biryukov <79166341370@yandex.ru> writes:\n> Hello. I'm catch error \"virtual tuple table slot does not have system\n> attributes\" when inserting row into partitioned table with RETURNING\n> xmin\n\nReproduced here. The example works in v11, and in v10 if you remove\nthe unnecessary-to-the-example primary key, so it seems like a clear\nregression. I didn't dig for the cause but I suppose it's related\nto Andres' slot-related changes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Jun 2020 12:57:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "On Fri, 5 Jun 2020 at 04:57, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Reproduced here. The example works in v11, and in v10 if you remove\n> the unnecessary-to-the-example primary key, so it seems like a clear\n> regression. I didn't dig for the cause but I suppose it's related\n> to Andres' slot-related changes.\n\nLooks like c2fe139c20 is the breaking commit.\n\nDavid\n\n\n",
"msg_date": "Fri, 5 Jun 2020 12:18:28 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Hello! Is it going to be fixed? This problem stops us from migrating to 12...-- С уважением, Павел 05.06.2020, 03:18, \"David Rowley\" <dgrowleyml@gmail.com>:On Fri, 5 Jun 2020 at 04:57, Tom Lane <tgl@sss.pgh.pa.us> wrote: Reproduced here. The example works in v11, and in v10 if you remove the unnecessary-to-the-example primary key, so it seems like a clear regression. I didn't dig for the cause but I suppose it's related to Andres' slot-related changes.Looks like c2fe139c20 is the breaking commit.David\n",
"msg_date": "Tue, 30 Jun 2020 10:09:13 +0300",
"msg_from": "Pavel Biryukov <79166341370@yandex.ru>",
"msg_from_op": true,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Hello,\n\nccing pgsql-hackers@postgresql.org\n\nUpon investigation, it seems that the problem is caused by the\nfollowing:\n\nThe slot passed to the call to ExecProcessReturning() inside\nExecInsert() is often a virtual tuple table slot. If there are system\ncolumns other than ctid and tableOid referenced in the RETURNING clause\n(not only xmin as in the bug report), it will lead to the ERROR as\nmentioned in this thread as virtual tuple table slots don't really store\nsuch columns. (ctid and tableOid are present directly in the\nTupleTableSlot struct and can be satisfied from there: refer:\nslot_getsysattr()))\n\nI have attached two alternate patches to solve the problem. Both patches\nuse and share a mechanism to detect if there are any such system\ncolumns. This is done inside ExecBuildProjectionInfo() and we store this\ninfo inside the ProjectionInfo struct. Then based on this info, system\ncolumns are populated in a suitable slot, which is then passed on to\nExecProcessReturning(). (If it is deemed that this operation only be\ndone for RETURNING, we can just as easily do it in the callsite for\nExecBuildProjectionInfo() in ExecInitModifyTable() for RETURNING instead\nof doing it inside ExecBuildProjectionInfo())\n\nThe first patch [1] explicitly creates a heap tuple table slot, fills in the\nsystem column values as we would do during heap_prepare_insert() and\nthen passes that slot to ExecProcessReturning(). (We use a heap tuple table\nslot as it is guaranteed to support these attributes).\n\nThe second patch [2] instead of relying on a heap tuple table slot,\nrelies on ExecGetReturningSlot() for the right slot and\ntable_tuple_fetch_row_version() to supply the system column values. 
It\ndoes make the assumption that the AM would supply a slot that will have\nthese system columns.\n\n[1] v1-0001-Explicitly-supply-system-columns-for-INSERT.RETUR.patch\n[2] v1-0001-Use-table_tuple_fetch_row_version-to-supply-INSER.patch\n\nRegards,\nSoumyadeep (VMware)",
"msg_date": "Mon, 6 Jul 2020 15:45:23 -0700",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Hi Soumyadeep,\n\nThanks for picking this up.\n\nOn Tue, Jul 7, 2020 at 7:46 AM Soumyadeep Chakraborty\n<soumyadeep2007@gmail.com> wrote:\n> Upon investigation, it seems that the problem is caused by the\n> following:\n>\n> The slot passed to the call to ExecProcessReturning() inside\n> ExecInsert() is often a virtual tuple table slot.\n\nActually, not that often in practice. The slot is not virtual, for\nexample, when inserting into a regular non-partitioned table. Whether\nor not it is virtual depends on the following piece of code in\nExecInitModifyTable():\n\n mtstate->mt_scans[i] =\n ExecInitExtraTupleSlot(mtstate->ps.state,\nExecGetResultType(mtstate->mt_plans[i]),\n\ntable_slot_callbacks(resultRelInfo->ri_RelationDesc));\n\nSpecifically, the call to table_slot_callbacks() above determines what\nkind of slot is assigned for a given target relation. For partitioned\ntables, it happens to return a virtual slot currently, per this\nimplementation:\n\n if (relation->rd_tableam)\n tts_cb = relation->rd_tableam->slot_callbacks(relation);\n else if (relation->rd_rel->relkind == RELKIND_FOREIGN_TABLE)\n {\n /*\n * Historically FDWs expect to store heap tuples in slots. Continue\n * handing them one, to make it less painful to adapt FDWs to new\n * versions. The cost of a heap slot over a virtual slot is pretty\n * small.\n */\n tts_cb = &TTSOpsHeapTuple;\n }\n else\n {\n /*\n * These need to be supported, as some parts of the code (like COPY)\n * need to create slots for such relations too. It seems better to\n * centralize the knowledge that a heap slot is the right thing in\n * that case here.\n */\n Assert(relation->rd_rel->relkind == RELKIND_VIEW ||\n relation->rd_rel->relkind == RELKIND_PARTITIONED_TABLE);\n tts_cb = &TTSOpsVirtual;\n }\n\nIf I change this to return a \"heap\" slot for partitioned tables, just\nlike for foreign tables, the problem goes away (see the attached). 
In\nfact, even make check-world passes, so I don't know why it isn't that\nway to begin with.\n\n> I have attached two alternate patches to solve the problem.\n\nIMHO, they are solving the problem at the wrong place. We should\nreally fix things so that the slot that gets passed down to\nExecProcessReturning() is of the correct type to begin with. We could\ndo what I suggest above or maybe find some other way.\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 7 Jul 2020 23:18:29 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
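The branching Amit quotes can be condensed to a few lines. Below is a toy Python rendering of the same decision (illustrative only; the real table_slot_callbacks() is C and returns TupleTableSlotOps pointers, not strings):

```python
# Condensed sketch of the table_slot_callbacks() branching quoted above.
# Relations with a real table AM get that AM's slot type; foreign tables
# get heap slots for historical reasons; views and partitioned tables,
# which have no AM, fall through to virtual slots.
RELKIND_FOREIGN_TABLE = "f"
RELKIND_VIEW = "v"
RELKIND_PARTITIONED_TABLE = "p"

def table_slot_callbacks(rel):
    if rel.get("tableam"):
        # A real AM decides its own slot type (heap AM -> buffer heap slot).
        return rel["tableam"]
    if rel["relkind"] == RELKIND_FOREIGN_TABLE:
        return "heap"            # FDWs historically expect heap tuples
    assert rel["relkind"] in (RELKIND_VIEW, RELKIND_PARTITIONED_TABLE)
    return "virtual"             # why the parent's slot cannot hold xmin
```

The last branch is the one the bug exercises: the INSERT's target is the partitioned parent, so the subplan slot is virtual, and RETURNING xmin has nowhere to look.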
{
"msg_contents": "Hi Amit,\n\nThanks for your reply!\n\nOn Tue, Jul 7, 2020 at 7:18 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Hi Soumyadeep,\n>\n> Thanks for picking this up.\n>\n> On Tue, Jul 7, 2020 at 7:46 AM Soumyadeep Chakraborty\n> <soumyadeep2007@gmail.com> wrote:\n> > Upon investigation, it seems that the problem is caused by the\n> > following:\n> >\n> > The slot passed to the call to ExecProcessReturning() inside\n> > ExecInsert() is often a virtual tuple table slot.\n>\n> Actually, not that often in practice. The slot is not virtual, for\n> example, when inserting into a regular non-partitioned table.\n\nIndeed! I meant partitioned tables are a common use case. Sorry, I\nshould have elaborated.\n\n> If I change this to return a \"heap\" slot for partitioned tables, just\n> like for foreign tables, the problem goes away (see the attached). In\n> fact, even make check-world passes, so I don't know why it isn't that\n> way to begin with.\n>\n\nThis is what I had thought of initially but I had taken a step back for 2\nreasons:\n\n1. It is not mandatory for an AM to supply a heap tuple in the slot\npassed to any AM routine. With your patch, the slot passed to\ntable_tuple_insert() inside ExecInsert() for instance is now expected to\nsupply a heap tuple for the subsequent call to ExecProcessReturning().\nThis can lead to garbage values for xmin, xmax, cmin and cmax. I tried\napplying your patch on Zedstore [1], a columnar AM, and consequently, I\ngot incorrect values for xmin, xmax etc with the query reported in this\nissue.\n\n2. This is a secondary aspect but I will mention it here for\ncompleteness. Not knowing enough about this code: will demanding heap\ntuples for partitioned tables all throughout the code have a performance\nimpact? At a first glance it didn't seem to be the case. 
However, I did\nfind lots of callsites for partitioning or otherwise where we kind of\nexpect a virtual tuple table slot (as evidenced with the calls to\nExecStoreVirtualTuple()). With your patch, we seem to be calling\nExecStoreVirtualTuple() on a heap tuple table slot, in various places:\nsuch as inside execute_attr_map_slot(). It seems to be harmless to do so\nhowever, in accordance with my limited investigation.\n\nAll in all, I think we have to explicitly supply those system columns. I\nheard from Daniel that one of the motivations for having table AMs\nwas to ensure that transaction meta-data storage is not demanded off any\nAM.\n\n[1] https://github.com/greenplum-db/postgres/tree/zedstore\n\nRegards,\nSoumyadeep\n\n\n",
"msg_date": "Tue, 7 Jul 2020 17:37:17 -0700",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Hi Soumyadeep,\n\nOn Wed, Jul 8, 2020 at 9:37 AM Soumyadeep Chakraborty\n<soumyadeep2007@gmail.com> wrote:\n> On Tue, Jul 7, 2020 at 7:18 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > If I change this to return a \"heap\" slot for partitioned tables, just\n> > like for foreign tables, the problem goes away (see the attached). In\n> > fact, even make check-world passes, so I don't know why it isn't that\n> > way to begin with.\n>\n> This is what I had thought of initially but I had taken a step back for 2\n> reasons:\n>\n> 1. It is not mandatory for an AM to supply a heap tuple in the slot\n> passed to any AM routine. With your patch, the slot passed to\n> table_tuple_insert() inside ExecInsert() for instance is now expected to\n> supply a heap tuple for the subsequent call to ExecProcessReturning().\n> This can lead to garbage values for xmin, xmax, cmin and cmax. I tried\n> applying your patch on Zedstore [1], a columnar AM, and consequently, I\n> got incorrect values for xmin, xmax etc with the query reported in this\n> issue.\n\nAh, I see. You might've noticed that ExecInsert() only ever sees leaf\npartitions, because tuple routing would've switched the result\nrelation to a leaf partition by the time we are in ExecInsert(). So,\ntable_tuple_insert() always refers to a leaf partition's AM. Not\nsure if you've also noticed but each leaf partition gets to own a slot\n(PartitionRoutingInfo.pi_PartitionTupleSlot), but currently it is only\nused if the leaf partition attribute numbers are not the same as the\nroot partitioned table. How about we also use it if the leaf\npartition AM's table_slot_callbacks() differs from the root\npartitioned table's slot's tts_ops? That would be the case, for\nexample, if the leaf partition is of Zedstore AM. 
In the more common\ncases where all leaf partitions are of heap AM, this would mean the\noriginal slot would be used as is, that is, if we accept hard-coding\ntable_slot_callbacks() to return a \"heap\" slot for partitioned tables\nas I suggest.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 8 Jul 2020 11:16:54 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Hey Amit,\n\nOn Tue, Jul 7, 2020 at 7:17 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> Ah, I see. You might've noticed that ExecInsert() only ever sees leaf\n> partitions, because tuple routing would've switched the result\n> relation to a leaf partition by the time we are in ExecInsert(). So,\n> table_tuple_insert() always refers to a leaf partition's AM. Not\n> sure if you've also noticed but each leaf partition gets to own a slot\n> (PartitionRoutingInfo.pi_PartitionTupleSlot), but currently it is only\n> used if the leaf partition attribute numbers are not the same as the\n> root partitioned table. How about we also use it if the leaf\n> partition AM's table_slot_callbacks() differs from the root\n> partitioned table's slot's tts_ops? That would be the case, for\n> example, if the leaf partition is of Zedstore AM. In the more common\n> cases where all leaf partitions are of heap AM, this would mean the\n> original slot would be used as is, that is, if we accept hard-coding\n> table_slot_callbacks() to return a \"heap\" slot for partitioned tables\n> as I suggest.\n\nEven then, we will still need to fill in the system columns explicitly as\npi_PartitionTupleSlot will not be filled with system columns after\nit comes back out of table_tuple_insert() if we have a non-heap AM.\n\n\nRegards,\nSoumyadeep (VMware)\n\n\n",
"msg_date": "Wed, 8 Jul 2020 09:52:31 -0700",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "On Thu, Jul 9, 2020 at 1:53 AM Soumyadeep Chakraborty\n<soumyadeep2007@gmail.com> wrote:\n> On Tue, Jul 7, 2020 at 7:17 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Ah, I see. You might've noticed that ExecInsert() only ever sees leaf\n> > partitions, because tuple routing would've switched the result\n> > relation to a leaf partition by the time we are in ExecInsert(). So,\n> > table_tuple_insert() always refers to a leaf partition's AM. Not\n> > sure if you've also noticed but each leaf partition gets to own a slot\n> > (PartitionRoutingInfo.pi_PartitionTupleSlot), but currently it is only\n> > used if the leaf partition attribute numbers are not the same as the\n> > root partitioned table. How about we also use it if the leaf\n> > partition AM's table_slot_callbacks() differs from the root\n> > partitioned table's slot's tts_ops? That would be the case, for\n> > example, if the leaf partition is of Zedstore AM. In the more common\n> > cases where all leaf partitions are of heap AM, this would mean the\n> > original slot would be used as is, that is, if we accept hard-coding\n> > table_slot_callbacks() to return a \"heap\" slot for partitioned tables\n> > as I suggest.\n>\n> Even then, we will still need to fill in the system columns explicitly as\n> pi_PartitionTupleSlot will not be filled with system columns after\n> it comes back out of table_tuple_insert() if we have a non-heap AM.\n\nWell, I was hoping that table_tuple_insert() would fill that info, but\nyou did say upthread that table AMs are not exactly expected to do so,\nso maybe you have a point.\n\nBy the way, what happens today if you do INSERT INTO a_zedstore_table\n... RETURNING xmin? Do you get an error \"xmin is unrecognized\" or\nsome such in slot_getsysattr() when trying to project the RETURNING\nlist?\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 9 Jul 2020 16:16:15 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Hey Amit,\n\nOn Thu, Jul 9, 2020 at 12:16 AM Amit Langote <amitlangote09@gmail.com> wrote:\n\n> By the way, what happens today if you do INSERT INTO a_zedstore_table\n> ... RETURNING xmin? Do you get an error \"xmin is unrecognized\" or\n> some such in slot_getsysattr() when trying to project the RETURNING\n> list?\n>\n\nWe get garbage values for xmin and cmin. If we request cmax/xmax, we get\nan ERROR from slot_getsystattr()->tts_zedstore_getsysattr():\n\"zedstore tuple table slot does not have system attributes (except xmin\nand cmin)\"\n\nA ZedstoreTupleTableSlot only stores xmin and xmax. Also,\nzedstoream_insert(), which is the tuple_insert() implementation, does\nnot supply the xmin/cmin, thus making those values garbage.\n\nFor context, Zedstore has its own UNDO log implementation to act as\nstorage for transaction information. (which is intended to be replaced\nwith the upstream UNDO log in the future).\n\nThe above behavior is not just restricted to INSERT..RETURNING, right\nnow. If we do a select <tx_column> from foo in Zedstore, the behavior is\nthe same. The transaction information is never returned from Zedstore\nin tableam calls that don't demand transactional information be\nused/returned. If you ask it to do a tuple_satisfies_snapshot(), OTOH,\nit will use the transactional information correctly. It will also\npopulate TM_FailureData, which contains xmax and cmax, in the APIs where\nit is demanded.\n\nI really wonder what other AMs are doing about these issues.\n\nI think we should either:\n\n1. Demand transactional information off of AMs for all APIs that involve\na projection of transactional information.\n\n2. Have some other component of Postgres supply the transactional\ninformation. This is what I think the upstream UNDO log can probably\nprovide.\n\n3. 
(Least elegant) Transform tuple table slots into heap tuple table\nslots (since it is the only kind of tuple storage that can supply\ntransactional info) and explicitly fill in the transactional values\ndepending on the context, whenever transactional information is\nprojected.\n\nFor this bug report, I am not sure what is right. Perhaps, to stop the\nbleeding temporarily, we could use the pi_PartitionTupleSlot and assume\nthat the AM needs to provide the transactional info in the respective\ninsert AM API calls, as well as demand a heap slot for partition roots\nand interior nodes. And then later on. we would need a larger effort\nmaking all of these APIs not really demand transactional information.\nPerhaps the UNDO framework will come to the rescue.\n\nRegards,\nSoumyadeep (VMware)\n\n\n",
"msg_date": "Thu, 9 Jul 2020 10:56:10 -0700",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Hi Soumyadeep,\n\nOn Fri, Jul 10, 2020 at 2:56 AM Soumyadeep Chakraborty\n<soumyadeep2007@gmail.com> wrote:\n>\n> Hey Amit,\n>\n> On Thu, Jul 9, 2020 at 12:16 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> > By the way, what happens today if you do INSERT INTO a_zedstore_table\n> > ... RETURNING xmin? Do you get an error \"xmin is unrecognized\" or\n> > some such in slot_getsysattr() when trying to project the RETURNING\n> > list?\n> >\n> We get garbage values for xmin and cmin. If we request cmax/xmax, we get\n> an ERROR from slot_getsystattr()->tts_zedstore_getsysattr():\n> \"zedstore tuple table slot does not have system attributes (except xmin\n> and cmin)\"\n>\n> A ZedstoreTupleTableSlot only stores xmin and xmax. Also,\n> zedstoream_insert(), which is the tuple_insert() implementation, does\n> not supply the xmin/cmin, thus making those values garbage.\n>\n> For context, Zedstore has its own UNDO log implementation to act as\n> storage for transaction information. (which is intended to be replaced\n> with the upstream UNDO log in the future).\n>\n> The above behavior is not just restricted to INSERT..RETURNING, right\n> now. If we do a select <tx_column> from foo in Zedstore, the behavior is\n> the same. The transaction information is never returned from Zedstore\n> in tableam calls that don't demand transactional information be\n> used/returned. If you ask it to do a tuple_satisfies_snapshot(), OTOH,\n> it will use the transactional information correctly. It will also\n> populate TM_FailureData, which contains xmax and cmax, in the APIs where\n> it is demanded.\n>\n> I really wonder what other AMs are doing about these issues.\n>\n> I think we should either:\n>\n> 1. Demand transactional information off of AMs for all APIs that involve\n> a projection of transactional information.\n>\n> 2. Have some other component of Postgres supply the transactional\n> information. 
This is what I think the upstream UNDO log can probably\n> provide.\n\nSo even if an AM's table_tuple_insert() itself doesn't populate the\ntransaction info into the slot handed to it, maybe as an optimization,\nit does not sound entirely unreasonable to expect that the AM's\nslot_getsysattr() callback returns it correctly when projecting a\ntarget list containing system columns. We shouldn't really need any\nnew core code to get the transaction-related system columns while\nthere exists a perfectly reasonable channel for it to arrive through\n-- TupleTableSlots. I suppose there's a reason why we allow AMs to\nprovide their own slot callbacks.\n\nWhether an AM uses UNDO log or something else to manage the\ntransaction info is up to the AM, so I don't see why the AMs\nthemselves shouldn't be in charge of returning that info, because only\nthey know where it is.\n\n> 3. (Least elegant) Transform tuple table slots into heap tuple table\n> slots (since it is the only kind of tuple storage that can supply\n> transactional info) and explicitly fill in the transactional values\n> depending on the context, whenever transactional information is\n> projected.\n>\n> For this bug report, I am not sure what is right. Perhaps, to stop the\n> bleeding temporarily, we could use the pi_PartitionTupleSlot and assume\n> that the AM needs to provide the transactional info in the respective\n> insert AM API calls,\n\nAs long as the AM's slot_getsysattr() callback returns the correct\nvalue, this works.\n\n> as well as demand a heap slot for partition roots\n> and interior nodes.\n\nIt would be a compromise on the core's part to use \"heap\" slots for\npartitioned tables, because they don't have a valid table AM.\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 10 Jul 2020 15:23:28 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Reading my own words, I think I must fix an ambiguity:\n\nOn Fri, Jul 10, 2020 at 3:23 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> So even if an AM's table_tuple_insert() itself doesn't populate the\n> transaction info into the slot handed to it, maybe as an optimization,\n> it does not sound entirely unreasonable to expect that the AM's\n> slot_getsysattr() callback returns it correctly when projecting a\n> target list containing system columns.\n\nThe \"maybe as an optimization\" refers to the part of the sentence that\ncomes before it. That is, I mean table_tuple_insert() may choose to\nnot populate the transaction info in the slot as an optimization.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 10 Jul 2020 20:43:18 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "On Thu, Jun 4, 2020 at 12:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Pavel Biryukov <79166341370@yandex.ru> writes:\n> > Hello. I'm catch error \"virtual tuple table slot does not have system\n> > attributes\" when inserting row into partitioned table with RETURNING\n> > xmin\n>\n> Reproduced here. The example works in v11, and in v10 if you remove\n> the unnecessary-to-the-example primary key, so it seems like a clear\n> regression. I didn't dig for the cause but I suppose it's related\n> to Andres' slot-related changes.\n\nI wonder whether it's really correct to classify this as a bug. The\nsubsequent discussion essentially boils down to this: the partitioned\ntable's children could use any AM, and they might not all use the same\nAM. The system columns that are relevant for the heap may therefore be\nrelevant to all, some, or none of the children. In fact, any fixed\nkind of tuple table slot we might choose to use for the parent has\nthis problem. If all of the children are of the same type -- and today\nthat would have to be heap -- then using that type of tuple table slot\nfor the parent as well would make sense. But there's no real reason\nwhy that's the correct answer in general. If the children are all of\nsome other type, using a heap slot for the parent is wrong; and if\nthey're all of different types, it's unclear that anything other than\na virtual slot makes any sense.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 11 Aug 2020 11:21:10 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Jun 4, 2020 at 12:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Pavel Biryukov <79166341370@yandex.ru> writes:\n>>> Hello. I'm catch error \"virtual tuple table slot does not have system\n>>> attributes\" when inserting row into partitioned table with RETURNING\n>>> xmin\n\n> I wonder whether it's really correct to classify this as a bug. The\n> subsequent discussion essentially boils down to this: the partitioned\n> table's children could use any AM, and they might not all use the same\n> AM. The system columns that are relevant for the heap may therefore be\n> relevant to all, some, or none of the children. In fact, any fixed\n> kind of tuple table slot we might choose to use for the parent has\n> this problem. If all of the children are of the same type -- and today\n> that would have to be heap -- then using that type of tuple table slot\n> for the parent as well would make sense. But there's no real reason\n> why that's the correct answer in general. If the children are all of\n> some other type, using a heap slot for the parent is wrong; and if\n> they're all of different types, it's unclear that anything other than\n> a virtual slot makes any sense.\n\nWell, if we want to allow such scenarios then we need to forbid queries\nfrom accessing \"system columns\" of a partitioned table, much as we do for\nviews. Failing way down inside the executor seems quite unacceptable.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 11 Aug 2020 11:33:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Hello all! I just want to point that Npgsql provider for .Net Core builds queries like that (RETURNING xmin) to keep track for concurrency.This bug stops us from moving to partitioned tables in Postgres 12 with Npgsql. https://www.npgsql.org/efcore/index.html -- С уважением, Павел 11.08.2020, 18:34, \"Tom Lane\" <tgl@sss.pgh.pa.us>:Robert Haas <robertmhaas@gmail.com> writes: On Thu, Jun 4, 2020 at 12:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote: Pavel Biryukov <79166341370@yandex.ru> writes: Hello. I'm catch error \"virtual tuple table slot does not have system attributes\" when inserting row into partitioned table with RETURNING xmin I wonder whether it's really correct to classify this as a bug. The subsequent discussion essentially boils down to this: the partitioned table's children could use any AM, and they might not all use the same AM. The system columns that are relevant for the heap may therefore be relevant to all, some, or none of the children. In fact, any fixed kind of tuple table slot we might choose to use for the parent has this problem. If all of the children are of the same type -- and today that would have to be heap -- then using that type of tuple table slot for the parent as well would make sense. But there's no real reason why that's the correct answer in general. If the children are all of some other type, using a heap slot for the parent is wrong; and if they're all of different types, it's unclear that anything other than a virtual slot makes any sense.Well, if we want to allow such scenarios then we need to forbid queriesfrom accessing \"system columns\" of a partitioned table, much as we do forviews. Failing way down inside the executor seems quite unacceptable. regards, tom lane",
"msg_date": "Tue, 11 Aug 2020 19:02:57 +0300",
"msg_from": "Pavel Biryukov <79166341370@yandex.ru>",
"msg_from_op": true,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
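For readers wondering why an ORM wants xmin at all: Npgsql's EF Core provider treats it as a row-version token for optimistic concurrency, roughly "UPDATE ... WHERE id = $1 AND xmin = $2", with the new xmin fetched via RETURNING. A minimal sketch of that pattern (toy Python with invented names, not Npgsql code; a dict stands in for the table and a counter for transaction IDs):

```python
# Sketch of xmin-as-version optimistic concurrency: every write stamps the
# row with a fresh "xid", and an update only succeeds if the caller still
# holds the version it last read.
class ConcurrencyError(Exception):
    pass

class Table:
    def __init__(self):
        self.next_xid = 1
        self.rows = {}                  # id -> (xmin, data)

    def insert(self, rid, data):
        self.rows[rid] = (self.next_xid, data)
        self.next_xid += 1
        return self.rows[rid][0]        # what RETURNING xmin hands back

    def update(self, rid, data, expected_xmin):
        xmin, _ = self.rows[rid]
        if xmin != expected_xmin:       # someone else wrote the row first
            raise ConcurrencyError(rid)
        self.rows[rid] = (self.next_xid, data)
        self.next_xid += 1
        return self.rows[rid][0]
```

This is why the RETURNING xmin regression blocks the migration: without the returned token, the client cannot participate in the version check on its next write.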
{
"msg_contents": "On Tue, Aug 11, 2020 at 12:02 PM Pavel Biryukov <79166341370@yandex.ru> wrote:\n> I just want to point that Npgsql provider for .Net Core builds queries like that (RETURNING xmin) to keep track for concurrency.\n> This bug stops us from moving to partitioned tables in Postgres 12 with Npgsql.\n\nThat's certainly a good reason to try to make it work. And we can make\nit work, if we're willing to assume that everything's a heap table.\nBut at some point, that hopefully won't be true any more, and then\nthis whole idea becomes pretty dubious. I think we shouldn't wait\nuntil it happens to start thinking about that problem.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 11 Aug 2020 13:13:18 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Hi,\n\nOn 2020-08-11 19:02:57 +0300, Pavel Biryukov wrote:\n> I just want to point that Npgsql provider for .Net Core builds queries like\n> that (RETURNING�xmin) to keep track for concurrency.\n\nCould you provide a bit more details about what that's actually used\nfor?\n\nRegards,\n\nAndres\n\n\n",
"msg_date": "Tue, 11 Aug 2020 10:22:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Aug 11, 2020 at 12:02 PM Pavel Biryukov <79166341370@yandex.ru> wrote:\n>> I just want to point that Npgsql provider for .Net Core builds queries like that (RETURNING xmin) to keep track for concurrency.\n>> This bug stops us from moving to partitioned tables in Postgres 12 with Npgsql.\n\n> That's certainly a good reason to try to make it work. And we can make\n> it work, if we're willing to assume that everything's a heap table.\n> But at some point, that hopefully won't be true any more, and then\n> this whole idea becomes pretty dubious. I think we shouldn't wait\n> until it happens to start thinking about that problem.\n\nFor xmin in particular, you don't have to assume \"everything's a heap\".\nWhat you have to assume is \"everything uses MVCC\", which seems a more\ndefensible position. It'll still fall down for foreign tables that are\npartitions, though.\n\nI echo Andres' nearby question about exactly why npgsql has such a\nhard dependency on xmin. Maybe what we need is to try to abstract\nthat a little, and see if we could require all partition members\nto support some unified concept of it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 11 Aug 2020 13:52:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-04 12:57:30 -0400, Tom Lane wrote:\n> Pavel Biryukov <79166341370@yandex.ru> writes:\n> > Hello. I'm catch error \"virtual tuple table slot does not have system\n> > attributes\" when inserting row into partitioned table with RETURNING\n> > xmin\n>\n> Reproduced here. The example works in v11, and in v10 if you remove\n> the unnecessary-to-the-example primary key, so it seems like a clear\n> regression. I didn't dig for the cause but I suppose it's related\n> to Andres' slot-related changes.\n\nThe reason we're getting the failure is that nodeModifyTable.c only is\ndealing with virtual tuple slots, which don't carry visibility\ninformation. Despite actually having infrastructure for creating a\npartition specific slot. If I force check_attrmap_match() to return\nfalse, the example starts working.\n\nI don't really know how to best fix this in the partitioning\ninfrastructure. Currently the determination whether there needs to be\nany conversion between subplan slot and the slot used for insertion is\nsolely based on comparing tuple descriptors. But for better or worse, we\ndon't represent system column accesses in tuple descriptors.\n\nIt's not that hard to force the slot creation & use whenever there's\nreturning, but somehow that feels hackish (but so does plenty other\nthings in execPartition.c). See attached.\n\n\nBut I'm worried that that's not enough: What if somebody in a trigger\nwants to access system columns besides tableoid and tid (which are\nhandled in a generic manner)? Currently - only for partitioned table DML\ngoing through the root table - we'll not have valid values for the\ntrigger. It's pretty dubious imo to use xmin/xmax in triggers, but ...\n\nI suspect we should just unconditionally use\npartrouteinfo->pi_PartitionTupleSlot. 
Going through\npartrouteinfo->pi_RootToPartitionMap if present, and ExecCopySlot()\notherwise.\n\n\nMedium term I think we should just plain out forbid references to system\ncolumns in partioned tables Or at least insist that all partitions have\nthat column. There's no meaningful way for some AMs to have xmin / xmax\nin a compatible way with heap.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 11 Aug 2020 11:02:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "On 2020-08-11 11:02:31 -0700, Andres Freund wrote:\n> It's not that hard to force the slot creation & use whenever there's\n> returning, but somehow that feels hackish (but so does plenty other\n> things in execPartition.c). See attached.\n\nActually attached this time.",
"msg_date": "Tue, 11 Aug 2020 11:06:29 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Hi,\n\nOn 2020-08-11 13:52:00 -0400, Tom Lane wrote:\n> For xmin in particular, you don't have to assume \"everything's a heap\".\n> What you have to assume is \"everything uses MVCC\", which seems a more\n> defensible position. It'll still fall down for foreign tables that are\n> partitions, though.\n\nDon't think that necessarily implies having a compatible xmin / xmax\naround. Certainly not a 32bit one. I guess an AM could always return\nInvalidOid, but that doesn't seem particularly helpful either.\n\nI think it'd be better if we actually tried to provide a way to do\nwhatever xmin is being used properly. I've seen many uses of xmin/xmax\nand many of them didn't work at all, and most of remaining ones only\nworked in common cases.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 11 Aug 2020 11:06:59 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Sure, Entity Framework is an ORM for .Net (and .Net Core). It has different providers for different databases (NpgSql for Postgres). It uses Optimistic concurrency. The common use case is to use xmin as \"concurrency token\". In code we make \"var e = new Entity();\", \"dbContext.Add(e)\" and \"dbContext.SaveChanges()\" (smth like that), and EF Core constructs sql for us, classical ORM; When new row is inserted, EF makes an insert with \"RETURNING xmin\" to keep it as concurrency token for further updates (update is made like \"where id = [id] AND xmin=[xmin]\" to be sure the row hasn't been updated by other clients). https://www.npgsql.org/efcore/modeling/concurrency.html -- С уважением, Павел 11.08.2020, 20:22, \"Andres Freund\" <andres@anarazel.de>:Hi,On 2020-08-11 19:02:57 +0300, Pavel Biryukov wrote: I just want to point that Npgsql provider for .Net Core builds queries like that (RETURNING xmin) to keep track for concurrency.Could you provide a bit more details about what that's actually usedfor?Regards,Andres",
"msg_date": "Tue, 11 Aug 2020 21:31:52 +0300",
"msg_from": "Pavel Biryukov <79166341370@yandex.ru>",
"msg_from_op": true,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
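The EF Core flow Pavel describes — INSERT ... RETURNING xmin to capture a token, then UPDATE ... WHERE id = [id] AND xmin = [xmin] so a stale token matches zero rows — is the generic optimistic-concurrency pattern and can be sketched without a database at all. The `Table` class and integer token scheme below are illustrative stand-ins, not Npgsql's or EF Core's actual code:

```python
# Sketch of the optimistic-concurrency scheme described above: the insert
# hands back a version token (EF Core uses xmin for this), and every update
# carries the token in its predicate, so a concurrent modification makes the
# update affect zero rows instead of silently overwriting. Names are invented.
from itertools import count


class Table:
    def __init__(self):
        self.rows = {}                 # id -> (token, payload)
        self._next_token = count(1)    # stands in for xid assignment

    def insert(self, row_id, payload):
        token = next(self._next_token)     # plays the role of "RETURNING xmin"
        self.rows[row_id] = (token, payload)
        return token

    def update(self, row_id, expected_token, payload):
        # "UPDATE ... WHERE id = %s AND xmin = %s" -- a stale token matches
        # nothing, which the client reports as a concurrency conflict.
        token, _ = self.rows[row_id]
        if token != expected_token:
            return False
        self.rows[row_id] = (next(self._next_token), payload)
        return True


t = Table()
tok = t.insert(1, "v1")
assert t.update(1, tok, "v2") is True    # first writer succeeds
assert t.update(1, tok, "v3") is False   # stale token -> conflict detected
```

The pattern itself is database-agnostic; what is PostgreSQL-specific (and what this thread is about) is using the system column xmin as the token.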
{
"msg_contents": "Hi,\n\nOn 2020-08-11 21:31:52 +0300, Pavel Biryukov wrote:\n> Entity Framework is an ORM for .Net (and .Net Core). It has different providers\n> for different databases (NpgSql for Postgres). It uses Optimistic concurrency.\n> The common use case is to use xmin as \"concurrency token\".\n> �\n> In code we make \"var e = new Entity();\", \"dbContext.Add(e)\" and\n> \"dbContext.SaveChanges()\" (smth like that), and EF Core constructs sql for us,\n> classical ORM;\n> �\n> When new row is inserted, EF makes an insert with \"RETURNING�xmin\" to keep it\n> as concurrency token for further updates (update is made like \"where id = [id]\n> AND xmin=[xmin]\" to be sure the row hasn't been updated by other clients).\n\nThat's not really a safe use of xmin, e.g. it could have wrapped around\nleading you to not notice a concurrent modification. Nor does it\nproperly deal with multiple statements within a transaction. Perhaps\nthose are low enough risk for you, but I don't xmin is a decent building\nblock for this kind of thing.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 11 Aug 2020 11:39:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": " I don't see a problem with \"wrapping around\" - the row's xmin does not change with freeze (AFAIK). It changes when the row is modified.So event if you hold some entity (with current xmin) for a long time (enough for \"wrap around\") and then try to update it, it will update ok. \"Multiple statements\" possible problems is managed for us by NpgSQL :) -- С уважением, Павел 11.08.2020, 21:39, \"Andres Freund\" <andres@anarazel.de>:Hi,On 2020-08-11 21:31:52 +0300, Pavel Biryukov wrote: Entity Framework is an ORM for .Net (and .Net Core). It has different providers for different databases (NpgSql for Postgres). It uses Optimistic concurrency. The common use case is to use xmin as \"concurrency token\". In code we make \"var e = new Entity();\", \"dbContext.Add(e)\" and \"dbContext.SaveChanges()\" (smth like that), and EF Core constructs sql for us, classical ORM; When new row is inserted, EF makes an insert with \"RETURNING xmin\" to keep it as concurrency token for further updates (update is made like \"where id = [id] AND xmin=[xmin]\" to be sure the row hasn't been updated by other clients).That's not really a safe use of xmin, e.g. it could have wrapped aroundleading you to not notice a concurrent modification. Nor does itproperly deal with multiple statements within a transaction. Perhapsthose are low enough risk for you, but I don't xmin is a decent buildingblock for this kind of thing.Greetings,Andres Freund",
"msg_date": "Tue, 11 Aug 2020 21:55:32 +0300",
"msg_from": "Pavel Biryukov <79166341370@yandex.ru>",
"msg_from_op": true,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Oh, I understand! There is a 1/(2^32) chance that after wrapping around and update it has the same xmin but different content and we don't notice it :) -- С уважением, Павел 11.08.2020, 21:39, \"Andres Freund\" <andres@anarazel.de>:Hi,On 2020-08-11 21:31:52 +0300, Pavel Biryukov wrote: Entity Framework is an ORM for .Net (and .Net Core). It has different providers for different databases (NpgSql for Postgres). It uses Optimistic concurrency. The common use case is to use xmin as \"concurrency token\". In code we make \"var e = new Entity();\", \"dbContext.Add(e)\" and \"dbContext.SaveChanges()\" (smth like that), and EF Core constructs sql for us, classical ORM; When new row is inserted, EF makes an insert with \"RETURNING xmin\" to keep it as concurrency token for further updates (update is made like \"where id = [id] AND xmin=[xmin]\" to be sure the row hasn't been updated by other clients).That's not really a safe use of xmin, e.g. it could have wrapped aroundleading you to not notice a concurrent modification. Nor does itproperly deal with multiple statements within a transaction. Perhapsthose are low enough risk for you, but I don't xmin is a decent buildingblock for this kind of thing.Greetings,Andres Freund",
"msg_date": "Tue, 11 Aug 2020 21:59:18 +0300",
"msg_from": "Pavel Biryukov <79166341370@yandex.ru>",
"msg_from_op": true,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Hi,\n\nOn 2020-08-11 21:55:32 +0300, Pavel Biryukov wrote:\n> I don't see a problem with \"wrapping around\" - the row's xmin does not change\n> with freeze (AFAIK). It changes when the row is modified.\n> So event if you hold some entity (with current xmin) for a long time (enough\n> for \"wrap around\") and then try to update it, it will update ok.\n\nThe problem isn't that it won't update ok, it is that it might update\ndespite there being another update since the RETURNING xmin.\ns1) BEGIN;INSERT ... RETURN xmin;COMMIT;\ns2) BEGIN;UPDATE .. WHERE xmin ...; COMMIT;\ns*) WRAPAROUND;\ns1) BEGIN;UPDATE .. WHERE xmin ...; COMMIT;\n\nthis could lead to s1 not noticing that s2 was updated.\n\n- Andres\n\n\n",
"msg_date": "Tue, 11 Aug 2020 12:01:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
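Andres' s1/s2 schedule can be made concrete: transaction ids are 32-bit counters, so after roughly four billion assignments the counter comes back around, and a row updated at that point can carry the same numeric xmin the token was captured at — a pure equality check then misses s2's intervening update. A minimal simulation; the only facts taken from PostgreSQL's xid format are the 32-bit space and that ids 0-2 are reserved, the rest is illustrative arithmetic:

```python
XID_SPACE = 2 ** 32          # transaction ids wrap at 32 bits

def advance(xid, n):
    """Advance an xid counter by n assignments, skipping reserved ids 0-2."""
    usable = XID_SPACE - 3   # ids 3 .. 2**32-1 are assignable
    return 3 + (xid - 3 + n) % usable

token = 1000                      # xmin captured via "RETURNING xmin" (s1)
s2_xid = advance(token, 1)        # s2 updates the row at the next xid

# After one full lap of assignable xids (minus the step s2 already took),
# another update can land on the very same numeric xid the token holds:
s3_xid = advance(s2_xid, (XID_SPACE - 3) - 1)
assert s3_xid == token            # "WHERE xmin = token" now matches again,
                                  # so s1's stale update would go through
```

In practice freezing and anti-wraparound vacuum change the picture, but the point stands: 32-bit equality alone cannot distinguish "unchanged" from "changed a wraparound ago".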
{
"msg_contents": "Hi,\n\nOn Wed, Aug 12, 2020 at 3:02 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2020-06-04 12:57:30 -0400, Tom Lane wrote:\n> > Pavel Biryukov <79166341370@yandex.ru> writes:\n> > > Hello. I'm catch error \"virtual tuple table slot does not have system\n> > > attributes\" when inserting row into partitioned table with RETURNING\n> > > xmin\n> >\n> > Reproduced here. The example works in v11, and in v10 if you remove\n> > the unnecessary-to-the-example primary key, so it seems like a clear\n> > regression. I didn't dig for the cause but I suppose it's related\n> > to Andres' slot-related changes.\n>\n> The reason we're getting the failure is that nodeModifyTable.c only is\n> dealing with virtual tuple slots, which don't carry visibility\n> information. Despite actually having infrastructure for creating a\n> partition specific slot. If I force check_attrmap_match() to return\n> false, the example starts working.\n>\n> I don't really know how to best fix this in the partitioning\n> infrastructure. Currently the determination whether there needs to be\n> any conversion between subplan slot and the slot used for insertion is\n> solely based on comparing tuple descriptors. But for better or worse, we\n> don't represent system column accesses in tuple descriptors.\n>\n> It's not that hard to force the slot creation & use whenever there's\n> returning, but somehow that feels hackish (but so does plenty other\n> things in execPartition.c). See attached.\n>\n> But I'm worried that that's not enough: What if somebody in a trigger\n> wants to access system columns besides tableoid and tid (which are\n> handled in a generic manner)? Currently - only for partitioned table DML\n> going through the root table - we'll not have valid values for the\n> trigger. It's pretty dubious imo to use xmin/xmax in triggers, but ...\n>\n> I suspect we should just unconditionally use\n> partrouteinfo->pi_PartitionTupleSlot. 
Going through\n> partrouteinfo->pi_RootToPartitionMap if present, and ExecCopySlot()\n> otherwise.\n\nI see that to be the only way forward even though there will be a\nslight hit in performance in typical cases where a virtual tuple slot\nsuffices.\n\n> Medium term I think we should just plain out forbid references to system\n> columns in partioned tables Or at least insist that all partitions have\n> that column.\n\nPerformance-wise I would prefer the former, because the latter would\ninvolve checking *all* partitions statically in the INSERT case,\nsomething that we've avoided doing so far.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 12 Aug 2020 12:51:35 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Wed, Aug 12, 2020 at 3:02 AM Andres Freund <andres@anarazel.de> wrote:\n>> Medium term I think we should just plain out forbid references to system\n>> columns in partioned tables Or at least insist that all partitions have\n>> that column.\n\n> Performance-wise I would prefer the former, because the latter would\n> involve checking *all* partitions statically in the INSERT case,\n> something that we've avoided doing so far.\n\nIt's not like we don't have a technology for doing that. The way this\nideally would work, IMV, is that the parent partitioned table either\nhas or doesn't have a given system column. If it does, then every\nchild must too, just like the way things work for user columns.\n\nThis'd require (a) some sort of consensus about which kinds of system\ncolumns can make sense --- as Andres noted, 32-bit xmin might not be\nthe best choice here --- and (b) some notation for users to declare\nwhich of these columns they want in a partitioned table. Once upon\na time we had WITH OIDS, maybe that idea could be extended.\n\nI'm not entirely sure that this is worth all the trouble, but that's\nhow I'd sketch doing it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 12 Aug 2020 00:08:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
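Tom's rule — the parent partitioned table either has or doesn't have a given system column, and every child must provide what the parent has — amounts to a containment check at partition creation/attach time. A hypothetical sketch; the AM names, their column sets, and `check_attach` are all invented for illustration, not PostgreSQL code:

```python
# Illustrative attach-time check: the parent declares a set of system
# columns, and a partition whose table AM does not provide all of them is
# rejected, mirroring how user columns already work for partitions.
AM_SYSTEM_COLUMNS = {
    "heap":     {"ctid", "tableoid", "xmin", "xmax", "cmin", "cmax"},
    "columnar": {"ctid", "tableoid"},   # hypothetical AM with no MVCC columns
}


def check_attach(parent_syscols, child_am):
    provided = AM_SYSTEM_COLUMNS[child_am]
    missing = parent_syscols - provided
    if missing:
        raise ValueError(
            f"cannot attach {child_am} partition: "
            f"AM lacks system columns {sorted(missing)}")
    # Columns in `provided - parent_syscols` would stay queryable only when
    # addressing the child directly, like extra inherited-child columns.


check_attach({"ctid", "tableoid", "xmin"}, "heap")   # accepted
try:
    check_attach({"ctid", "tableoid", "xmin"}, "columnar")
except ValueError:
    pass                                             # rejected, per the rule
```

With such a check the AM never does anything conditional at runtime — it simply provides its fixed column set, and mismatches are caught at DDL time.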
{
"msg_contents": "On Wed, Aug 12, 2020 at 1:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Wed, Aug 12, 2020 at 3:02 AM Andres Freund <andres@anarazel.de> wrote:\n> >> Medium term I think we should just plain out forbid references to system\n> >> columns in partioned tables Or at least insist that all partitions have\n> >> that column.\n>\n> > Performance-wise I would prefer the former, because the latter would\n> > involve checking *all* partitions statically in the INSERT case,\n> > something that we've avoided doing so far.\n>\n> It's not like we don't have a technology for doing that. The way this\n> ideally would work, IMV, is that the parent partitioned table either\n> has or doesn't have a given system column. If it does, then every\n> child must too, just like the way things work for user columns.\n\nAh, I may have misread \"insisting that all partitions have a given\nsystem column\" as doing that on every query, but maybe Andres meant\nwhat you are describing here.\n\n> This'd require (a) some sort of consensus about which kinds of system\n> columns can make sense --- as Andres noted, 32-bit xmin might not be\n> the best choice here --- and (b) some notation for users to declare\n> which of these columns they want in a partitioned table. Once upon\n> a time we had WITH OIDS, maybe that idea could be extended.\n\nFor (a), isn't there already a consensus that all table AMs support at\nleast the set of system columns described in 5.5 System Columns [1]\neven if the individual members of that set are no longer the best\nchoice at this point? I do agree that we'd need (b) in some form to\nrequire AMs to fill those columns which it seems is not the case\ncurrently.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/docs/current/ddl-system-columns.html\n\n\n",
"msg_date": "Wed, 12 Aug 2020 14:19:12 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": " With \"wrap around\" update when the same xmin value is assigned to the row (leading to concurrency detection problem) the solution may be to make sure in Postgres that the new xmin value is different from previous (freezed row with xmin before \"wrap around\") -- С уважением, Павел 11.08.2020, 21:39, \"Andres Freund\" <andres@anarazel.de>:Hi,On 2020-08-11 21:31:52 +0300, Pavel Biryukov wrote: Entity Framework is an ORM for .Net (and .Net Core). It has different providers for different databases (NpgSql for Postgres). It uses Optimistic concurrency. The common use case is to use xmin as \"concurrency token\". In code we make \"var e = new Entity();\", \"dbContext.Add(e)\" and \"dbContext.SaveChanges()\" (smth like that), and EF Core constructs sql for us, classical ORM; When new row is inserted, EF makes an insert with \"RETURNING xmin\" to keep it as concurrency token for further updates (update is made like \"where id = [id] AND xmin=[xmin]\" to be sure the row hasn't been updated by other clients).That's not really a safe use of xmin, e.g. it could have wrapped aroundleading you to not notice a concurrent modification. Nor does itproperly deal with multiple statements within a transaction. Perhapsthose are low enough risk for you, but I don't xmin is a decent buildingblock for this kind of thing.Greetings,Andres Freund",
"msg_date": "Wed, 12 Aug 2020 11:06:23 +0300",
"msg_from": "Pavel Biryukov <79166341370@yandex.ru>",
"msg_from_op": true,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Hi,\r\n\r\nwhile the request for returning xmin of partitioned tables is still valid, i’d like to add some information and a possible workaround.\r\n\r\nOn 2020-08-11 21:31:52 +0300, Pavel Biryukov wrote:\r\n Entity Framework is an ORM for .Net (and .Net Core). It has different providers\r\n for different databases (NpgSql for Postgres). It uses Optimistic concurrency.\r\n The common use case is to use xmin as \"concurrency token\".\r\n\r\n In code we make \"var e = new Entity();\", \"dbContext.Add(e)\" and\r\n \"dbContext.SaveChanges()\" (smth like that), and EF Core constructs sql for us,\r\n classical ORM;\r\n\r\n When new row is inserted, EF makes an insert with \"RETURNING xmin\" to keep it\r\n as concurrency token for further updates (update is made like \"where id = [id]\r\n AND xmin=[xmin]\" to be sure the row hasn't been updated by other clients).\r\n\r\nNeither the Entity Framework, nor npgsql rely on the column xmin. Both don’t know about this column in their codebase.\r\nIn the case oft he EF i’m sure that this holds true for all versions, since it is designed as DBMS independant, and as such will never know anything about a PostgreSQL specific column.\r\nAlso you can use any ADO.Net provider to connect to a concrete DBMS – i for example use dotConnect for PostgreSQL because it provided more features and less bugs as Npgsql at the time of decission.\r\nAs for Npgsql i have only checked that the current HEAD has no reference to xmin in its source code.\r\n\r\nWith that in mind, i assume the OP included the column xmin in his Entity Model by himself and set the ConcurrencyMode to fixed for that column.\r\nAs xmin is a system column that the EF should never try to update (PostgreSQL will reject this attempt, i think), i’d suggest using a self defined column (row_version for example) and either use triggers on update and insert to increment its value (works even with updates outside of EF) or let the EF do the 
increment.\r\n\r\nRegards\r\nWilm Hoyer.\r\n\r\n\r\n\r\n\n\n\n\n\n\n\n\n\n\nHi,\nwhile the request for returning xmin of partitioned tables is still valid, i’d like to add some information and a possible workaround.\n\r\nOn 2020-08-11 21:31:52 +0300, Pavel Biryukov wrote:\n\n Entity Framework is an ORM for .Net (and .Net Core). It has different providers\r\n for different databases (NpgSql for Postgres). It uses Optimistic concurrency.\r\n The common use case is to use xmin as \"concurrency token\".\r\n \r\n In code we make \"var e = new Entity();\", \"dbContext.Add(e)\" and\r\n \"dbContext.SaveChanges()\" (smth like that), and EF Core constructs sql for us,\r\n classical ORM;\r\n \r\n When new row is inserted, EF makes an insert with \"RETURNING xmin\" to keep it\r\n as concurrency token for further updates (update is made like \"where id = [id]\r\n AND xmin=[xmin]\" to be sure the row hasn't been updated by other clients).\n \nNeither the Entity Framework, nor npgsql rely on the column xmin. 
Both don’t know about this column in their codebase.\nIn the case oft he EF i’m sure that this holds true for all versions, since it is designed as DBMS independant, and as such will never know anything about a PostgreSQL specific column.\nAlso you can use any ADO.Net provider to connect to a concrete DBMS – i for example use dotConnect for PostgreSQL because it provided more features and less bugs as Npgsql at the time of decission.\nAs for Npgsql i have only checked that the current HEAD has no reference to xmin in its source code.\n \nWith that in mind, i assume the OP included the column xmin in his Entity Model by himself and set the ConcurrencyMode to fixed for that column.\nAs xmin is a system column that the EF should never try to update (PostgreSQL will reject this attempt, i think), i’d suggest using a self defined column (row_version for example) and either use triggers on update and insert to increment\r\n its value (works even with updates outside of EF) or let the EF do the increment.\n \nRegards\nWilm Hoyer.",
"msg_date": "Wed, 12 Aug 2020 09:35:04 +0000",
"msg_from": "Wilm Hoyer <W.Hoyer@dental-vision.de>",
"msg_from_op": false,
"msg_subject": "AW: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Wilm, Have a look, there is even an extension method for xmin (stable branch): https://github.com/npgsql/efcore.pg/blob/stable/src/EFCore.PG/Extensions/NpgsqlEntityTypeBuilderExtensions.cs EF Core is mostly independent from DB, but there usually are some \"tweaks\" for each database that you should know (like for SQL server the native concurrency token should be like byte[]:[TimeStamp]public byte[] RowVersion { get; set; }) Additional \"self defined column\" for billion rows tables leads to additional space needed. We use partitioning when the tables are LARGE :) This works fine in 10, 11, it's strange it is broken in 12 (from db users point of view, I haven't examined the sources of PG for internals...) -- С уважением, Павел 12.08.2020, 12:35, \"Wilm Hoyer\" <w.hoyer@dental-vision.de>:Hi,while the request for returning xmin of partitioned tables is still valid, i’d like to add some information and a possible workaround.On 2020-08-11 21:31:52 +0300, Pavel Biryukov wrote: Entity Framework is an ORM for .Net (and .Net Core). It has different providers for different databases (NpgSql for Postgres). It uses Optimistic concurrency. The common use case is to use xmin as \"concurrency token\". In code we make \"var e = new Entity();\", \"dbContext.Add(e)\" and \"dbContext.SaveChanges()\" (smth like that), and EF Core constructs sql for us, classical ORM; When new row is inserted, EF makes an insert with \"RETURNING xmin\" to keep it as concurrency token for further updates (update is made like \"where id = [id] AND xmin=[xmin]\" to be sure the row hasn't been updated by other clients). Neither the Entity Framework, nor npgsql rely on the column xmin. 
Both don’t know about this column in their codebase.In the case oft he EF i’m sure that this holds true for all versions, since it is designed as DBMS independant, and as such will never know anything about a PostgreSQL specific column.Also you can use any ADO.Net provider to connect to a concrete DBMS – i for example use dotConnect for PostgreSQL because it provided more features and less bugs as Npgsql at the time of decission.As for Npgsql i have only checked that the current HEAD has no reference to xmin in its source code. With that in mind, i assume the OP included the column xmin in his Entity Model by himself and set the ConcurrencyMode to fixed for that column.As xmin is a system column that the EF should never try to update (PostgreSQL will reject this attempt, i think), i’d suggest using a self defined column (row_version for example) and either use triggers on update and insert to increment its value (works even with updates outside of EF) or let the EF do the increment. RegardsWilm Hoyer. ",
"msg_date": "Wed, 12 Aug 2020 13:10:52 +0300",
"msg_from": "Pavel Biryukov <79166341370@yandex.ru>",
"msg_from_op": true,
"msg_subject": "Re: AW: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "> Wilm,\r\n\r\n> Have a look, there is even an extension method for xmin (stable branch):\r\n\r\n> https://github.com/npgsql/efcore.pg/blob/stable/src/EFCore.PG/Extensions/NpgsqlEntityTypeBuilderExtensions.cs\r\n\r\n> EF Core is mostly independent from DB, but there usually are some \"tweaks\" for each database that you should know (like for SQL server the native concurrency token should be like byte[]:\r\n\r\n 1. [TimeStamp]\r\n 2. public byte[] RowVersion { get; set; }\r\n 3. )\r\n\r\n> Additional \"self defined column\" for billion rows tables leads to additional space needed. We use partitioning when the tables are LARGE :)\r\n\r\n> This works fine in 10, 11, it's strange it is broken in 12 (from db users point of view, I haven't examined the sources of PG for internals...)\r\n\r\nSorry, i had not looked into the extension since it is not part of core npgsql. As it states: Npgsql.EntityFrameworkCore.PostgreSQL is an Entity Framework Core provider built on top of Npgsql<https://github.com/npgsql/npgsql>. (and we left npgsql before this extension was started).\r\nI guess it would be easy for them to implement a solution, because that basically means adopting the relevant part from the EF SqlServer implementation. 
But i fear you’re out of luck with them, as support for table partitioning is still an open issue in their project.\r\n\r\nBest regards\r\nWilm.\r\n\n\n\n\n\n\n\n\n\n \n\n> Wilm,\n\n\n \n\n\n> Have a look, there is even an extension method for xmin (stable branch):\n\n\n \n\n\n> \r\nhttps://github.com/npgsql/efcore.pg/blob/stable/src/EFCore.PG/Extensions/NpgsqlEntityTypeBuilderExtensions.cs\n\n\n \n\n\n> EF Core is mostly independent from DB, but there usually are some \"tweaks\" for each database that you should know (like for SQL server the native concurrency token should be like byte[]:\n\n\n[TimeStamp]\npublic\nbyte[]\nRowVersion\n{\nget;\nset;\n}\n)\n\n\n \n\n\n> Additional \"self defined column\" for billion rows tables leads to additional space needed. We use partitioning when the tables are LARGE :)\n\n\n \n\n\n> This works fine in 10, 11, it's strange it is broken in 12 (from db users point of view, I haven't examined the sources of PG for internals...)\n\n\n \n\n\nSorry, i had not looked into the extension since it is not part of core npgsql. As it states: Npgsql.EntityFrameworkCore.PostgreSQL is an Entity Framework Core provider built on top of\r\nNpgsql. (and we left npgsql before this extension was started).\nI guess it would be easy for them to implement a solution, because that basically means adopting the relevant part from the EF SqlServer implementation. But i fear you’re out of luck with them, as support for table partitioning is still\r\n an open issue in their project.\n \nBest regards\nWilm.",
"msg_date": "Wed, 12 Aug 2020 10:57:49 +0000",
"msg_from": "Wilm Hoyer <W.Hoyer@dental-vision.de>",
"msg_from_op": false,
"msg_subject": "AW: AW: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Wilm, For different developers to begin some \"adoptation\" there should be a clear \"Breaking changes\" record in Postgres version history. For now we consider it bug (worked before, not working now), that should be just fixed... -- С уважением, Павел 12.08.2020, 13:57, \"Wilm Hoyer\" <w.hoyer@dental-vision.de>: > Wilm, > Have a look, there is even an extension method for xmin (stable branch): > https://github.com/npgsql/efcore.pg/blob/stable/src/EFCore.PG/Extensions/NpgsqlEntityTypeBuilderExtensions.cs > EF Core is mostly independent from DB, but there usually are some \"tweaks\" for each database that you should know (like for SQL server the native concurrency token should be like byte[]:[TimeStamp]public byte[] RowVersion { get; set; }) > Additional \"self defined column\" for billion rows tables leads to additional space needed. We use partitioning when the tables are LARGE :) > This works fine in 10, 11, it's strange it is broken in 12 (from db users point of view, I haven't examined the sources of PG for internals...) Sorry, i had not looked into the extension since it is not part of core npgsql. As it states: Npgsql.EntityFrameworkCore.PostgreSQL is an Entity Framework Core provider built on top of Npgsql. (and we left npgsql before this extension was started).I guess it would be easy for them to implement a solution, because that basically means adopting the relevant part from the EF SqlServer implementation. But i fear you’re out of luck with them, as support for table partitioning is still an open issue in their project. Best regardsWilm.",
"msg_date": "Wed, 12 Aug 2020 14:08:15 +0300",
"msg_from": "Pavel Biryukov <79166341370@yandex.ru>",
"msg_from_op": true,
"msg_subject": "Re: AW: AW: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Hi,\n\nOn 2020-08-12 14:19:12 +0900, Amit Langote wrote:\n> On Wed, Aug 12, 2020 at 1:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Amit Langote <amitlangote09@gmail.com> writes:\n> > > On Wed, Aug 12, 2020 at 3:02 AM Andres Freund <andres@anarazel.de> wrote:\n> > >> Medium term I think we should just plain out forbid references to system\n> > >> columns in partioned tables Or at least insist that all partitions have\n> > >> that column.\n> >\n> > > Performance-wise I would prefer the former, because the latter would\n> > > involve checking *all* partitions statically in the INSERT case,\n> > > something that we've avoided doing so far.\n> >\n> > It's not like we don't have a technology for doing that. The way this\n> > ideally would work, IMV, is that the parent partitioned table either\n> > has or doesn't have a given system column. If it does, then every\n> > child must too, just like the way things work for user columns.\n> \n> Ah, I may have misread \"insisting that all partitions have a given\n> system column\" as doing that on every query, but maybe Andres meant\n> what you are describing here.\n\nI think Tom's formulation makes sense.\n\n\n> > This'd require (a) some sort of consensus about which kinds of system\n> > columns can make sense --- as Andres noted, 32-bit xmin might not be\n> > the best choice here --- and (b) some notation for users to declare\n> > which of these columns they want in a partitioned table. Once upon\n> > a time we had WITH OIDS, maybe that idea could be extended.\n> \n> For (a), isn't there already a consensus that all table AMs support at\n> least the set of system columns described in 5.5 System Columns [1]\n> even if the individual members of that set are no longer the best\n> choice at this point?\n\nI don't think there is. I don't think xmin/xmax/cmin/cmax should be\namong those. 
tableoid and ctid are handled by generic code, so I think\nthey would be among the required columns.\n\nWhere do you see that concensus?\n\n\n> I do agree that we'd need (b) in some form to require AMs to fill\n> those columns which it seems is not the case currently.\n\nHm? What are you referencing here?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 12 Aug 2020 09:27:18 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Hi,\n\nOn Thu, Aug 13, 2020 at 1:27 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2020-08-12 14:19:12 +0900, Amit Langote wrote:\n> > On Wed, Aug 12, 2020 at 1:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > It's not like we don't have a technology for doing that. The way this\n> > > ideally would work, IMV, is that the parent partitioned table either\n> > > has or doesn't have a given system column. If it does, then every\n> > > child must too, just like the way things work for user columns.\n> >\n> > Ah, I may have misread \"insisting that all partitions have a given\n> > system column\" as doing that on every query, but maybe Andres meant\n> > what you are describing here.\n>\n> I think Tom's formulation makes sense.\n\nYes, I agree.\n\n> > > This'd require (a) some sort of consensus about which kinds of system\n> > > columns can make sense --- as Andres noted, 32-bit xmin might not be\n> > > the best choice here --- and (b) some notation for users to declare\n> > > which of these columns they want in a partitioned table. Once upon\n> > > a time we had WITH OIDS, maybe that idea could be extended.\n> >\n> > For (a), isn't there already a consensus that all table AMs support at\n> > least the set of system columns described in 5.5 System Columns [1]\n> > even if the individual members of that set are no longer the best\n> > choice at this point?\n>\n> I don't think there is. I don't think xmin/xmax/cmin/cmax should be\n> among those. tableoid and ctid are handled by generic code, so I think\n> they would be among the required columns.\n>\n> Where do you see that concensus?\n\nPerhaps I was wrong to use the word consensus. I was trying to say\nthat table AM extensibility work didn't change the description in 5.5\nSystem Columns, which still says *all* tables, irrespective of their\nAM, implicitly have those columns, so I assumed we continue to ask AM\nauthors to have space for those columns in their tuples. 
Maybe, that\nlist is a legacy of heapam and updating it in an AM-agnostic manner\nwould require consensus.\n\n> > I do agree that we'd need (b) in some form to require AMs to fill\n> > those columns which it seems is not the case currently.\n>\n> Hm? What are you referencing here?\n\nI meant that WITH <a-system-column> specified on a table presumably\nforces an AM to ensure that the column is present in its tuples, like\nWITH OIDS specification on a table would force heapam to initialize\nthe oid system column in all tuples being inserted into that table.\nLack of the same notation for other system columns means that AMs\ndon't feel forced to ensure those columns are present in their tuples.\nAlso, having the WITH notation makes it easy to enforce that all\npartitions in a given hierarchy have AMs that respect the WITH\nspecification.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 13 Aug 2020 13:13:18 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Thu, Aug 13, 2020 at 1:27 AM Andres Freund <andres@anarazel.de> wrote:\n>> Hm? What are you referencing here?\n\n> I meant that WITH <a-system-column> specified on a table presumably\n> forces an AM to ensure that the column is present in its tuples, like\n> WITH OIDS specification on a table would force heapam to initialize\n> the oid system column in all tuples being inserted into that table.\n> Lack of the same notation for other system columns means that AMs\n> don't feel forced to ensure those columns are present in their tuples.\n\nI might be missing some subtlety here, but it seems to me that a\ngiven table AM will probably have a fixed set of system columns\nthat it provides. For example heapam would provide exactly the\ncurrent set of columns, no matter what. I'm imagining that\n(1) if the parent partitioned table has column X, and the proposed\nchild-table AM doesn't provide that, then we just refuse to\ncreate/attach the partition.\n(2) if a child-table AM provides some system column that the\nparent does not, then you can access that column when querying\nthe child table directly, but not when querying the parent.\nThis works just like extra child columns in traditional PG\ninheritance.\n\nGiven these rules, an AM isn't expected to do anything conditional at\nruntime: it just provides what it provides. Instead we have an issue\nto solve in or near the TupleTableSlot APIs, namely how to deal with\ntuples that don't all have the same system columns. But we'd have\nthat problem anyway.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 13 Aug 2020 00:35:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Hi! What's the state with this issue? -- Regards, Pavel",
"msg_date": "Tue, 27 Oct 2020 16:55:17 +0300",
"msg_from": "Pavel Biryukov <79166341370@yandex.ru>",
"msg_from_op": true,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "On Tue, Oct 27, 2020 at 10:55 PM Pavel Biryukov <79166341370@yandex.ru> wrote:\n>\n> Hi!\n>\n> What's the state with this issue?\n\nI think that the patch that Andres Freund posted earlier on this\nthread [1] would be fine as a workaround at least for stable releases\nv12 and v13. I have attached with this email a rebased version of\nthat patch, although I also made a few changes. The idea of the patch\nis to allocate and use a partition-specific *non-virtual* slot, one\nthat is capable of providing system columns when the RETURNING\nprojection needs them. Andres' patch would allocate such a slot even\nif RETURNING contained no system columns, whereas I changed the slot\ncreation code stanza to also check that RETURNING indeed contains\nsystem columns. I've attached 2 patch files: one for HEAD and another\nfor v12 and v13 branches.\n\nThat said, the discussion on what to do going forward to *cleanly*\nsupport accessing system columns through partitioned tables is\npending, but maybe the \"workaround\" fix will be enough in the meantime\n(at least v12 and v13 can only get a workaround fix).\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/20200811180629.zx57llliqcmcgfyr%40alap3.anarazel.de",
"msg_date": "Wed, 28 Oct 2020 23:01:16 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Hi Community\r\n\r\nRelated to ef core I found another issue. I try to convert a database column from bool to int with ef core migrations.\r\nSo if I generate the migration file it will look like as following: ALTER TABLE \"Customer\" ALTER COLUMN \"State\" TYPE INT\r\nBut in Postgres this statement need the “using” extension so it should look like this: ALTER TABLE \" Customer\" ALTER COLUMN \" State \" TYPE INT USING \" State\"::integer;\r\n\r\nIn my opinion ef core should be generate the right statement (with using statements) out of the box, so is there an issue with the NPG Provider?\r\n\r\nThank you in advance.\r\n\r\nKind regards\r\nKenan\r\n\r\nVon: Wilm Hoyer <W.Hoyer@dental-vision.de>\r\nGesendet: Mittwoch, 12. August 2020 11:35\r\nAn: Pavel Biryukov <79166341370@yandex.ru>; Andres Freund <andres@anarazel.de>\r\nCc: Tom Lane <tgl@sss.pgh.pa.us>; Robert Haas <robertmhaas@gmail.com>; PostgreSQL mailing lists <pgsql-bugs@lists.postgresql.org>\r\nBetreff: AW: posgres 12 bug (partitioned table)\r\n\r\n\r\nHi,\r\n\r\nwhile the request for returning xmin of partitioned tables is still valid, i’d like to add some information and a possible workaround.\r\n\r\nOn 2020-08-11 21:31:52 +0300, Pavel Biryukov wrote:\r\n Entity Framework is an ORM for .Net (and .Net Core). It has different providers\r\n for different databases (NpgSql for Postgres). It uses Optimistic concurrency.\r\n The common use case is to use xmin as \"concurrency token\".\r\n\r\n In code we make \"var e = new Entity();\", \"dbContext.Add(e)\" and\r\n \"dbContext.SaveChanges()\" (smth like that), and EF Core constructs sql for us,\r\n classical ORM;\r\n\r\n When new row is inserted, EF makes an insert with \"RETURNING xmin\" to keep it\r\n as concurrency token for further updates (update is made like \"where id = [id]\r\n AND xmin=[xmin]\" to be sure the row hasn't been updated by other clients).\r\n\r\nNeither the Entity Framework, nor npgsql rely on the column xmin. 
Both don’t know about this column in their codebase.\r\nIn the case of the EF i’m sure that this holds true for all versions, since it is designed as DBMS-independent, and as such will never know anything about a PostgreSQL-specific column.\r\nAlso you can use any ADO.Net provider to connect to a concrete DBMS – i for example use dotConnect for PostgreSQL because it provided more features and fewer bugs than Npgsql at the time of decision.\r\nAs for Npgsql i have only checked that the current HEAD has no reference to xmin in its source code.\r\n\r\nWith that in mind, i assume the OP included the column xmin in his Entity Model by himself and set the ConcurrencyMode to fixed for that column.\r\nAs xmin is a system column that the EF should never try to update (PostgreSQL will reject this attempt, i think), i’d suggest using a self-defined column (row_version for example) and either use triggers on update and insert to increment its value (works even with updates outside of EF) or let the EF do the increment.\r\n\r\nRegards\r\nWilm Hoyer.",
"msg_date": "Mon, 7 Dec 2020 13:49:01 +0000",
"msg_from": "<kubilay.dag@post.ch>",
"msg_from_op": false,
"msg_subject": "AW: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "On 10/28/20 10:01 AM, Amit Langote wrote:\n> On Tue, Oct 27, 2020 at 10:55 PM Pavel Biryukov <79166341370@yandex.ru> wrote:\n>> Hi!\n>>\n>> What's the state with this issue?\n> I think that the patch that Andres Freund posted earlier on this\n> thread [1] would be fine as a workaround at least for stable releases\n> v12 and v13. I have attached with this email a rebased version of\n> that patch, although I also made a few changes. The idea of the patch\n> is to allocate and use a partition-specific *non-virtual* slot, one\n> that is capable of providing system columns when the RETURNING\n> projection needs them. Andres' patch would allocate such a slot even\n> if RETURNING contained no system columns, whereas I changed the slot\n> creation code stanza to also check that RETURNING indeed contains\n> system columns. I've attached 2 patch files: one for HEAD and another\n> for v12 and v13 branches.\n>\n> That said, the discussion on what to do going forward to *cleanly*\n> support accessing system columns through partitioned tables is\n> pending, but maybe the \"workaround\" fix will be enough in the meantime\n> (at least v12 and v13 can only get a workaround fix).\n> stgresql.org/message-id/20200811180629.zx57llliqcmcgfyr%40alap3.anarazel.de\n\nThis thread has been quiet for while. Does everyone agree with Amit's \nproposal to use this patch as a workaround?\n\nAlso, is there a CF entry for this issue? If so I am unable to find it.\n\nRegards,\n\n-- \n-David\ndavid@pgmasters.net\n\n\n\n",
"msg_date": "Fri, 5 Mar 2021 11:25:12 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "We are waiting for the fix :) -- Regards, Pavel",
"msg_date": "Fri, 05 Mar 2021 19:55:18 +0300",
"msg_from": "Pavel Biryukov <79166341370@yandex.ru>",
"msg_from_op": true,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Sorry for the noise to everyone who will see the below reply a 2nd\ntime, but doing...\n\nOn Fri, Mar 12, 2021 at 7:16 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> (adding -hackers)\n\n..this is in violation of list rules which I should've known. So I'm\nre-replying after removing -hackers.\n\n> On Sat, Mar 6, 2021 at 1:25 AM David Steele <david@pgmasters.net> wrote:\n> > On 10/28/20 10:01 AM, Amit Langote wrote:\n> > > On Tue, Oct 27, 2020 at 10:55 PM Pavel Biryukov <79166341370@yandex.ru> wrote:\n> > >> Hi!\n> > >>\n> > >> What's the state with this issue?\n> > > I think that the patch that Andres Freund posted earlier on this\n> > > thread [1] would be fine as a workaround at least for stable releases\n> > > v12 and v13. I have attached with this email a rebased version of\n> > > that patch, although I also made a few changes. The idea of the patch\n> > > is to allocate and use a partition-specific *non-virtual* slot, one\n> > > that is capable of providing system columns when the RETURNING\n> > > projection needs them. Andres' patch would allocate such a slot even\n> > > if RETURNING contained no system columns, whereas I changed the slot\n> > > creation code stanza to also check that RETURNING indeed contains\n> > > system columns. 
I've attached 2 patch files: one for HEAD and another\n> > > for v12 and v13 branches.\n> > >\n> > > That said, the discussion on what to do going forward to *cleanly*\n> > > support accessing system columns through partitioned tables is\n> > > pending, but maybe the \"workaround\" fix will be enough in the meantime\n> > > (at least v12 and v13 can only get a workaround fix).\n> > > stgresql.org/message-id/20200811180629.zx57llliqcmcgfyr%40alap3.anarazel.de\n> >\n> > This thread has been quiet for while.\n>\n> Sorry David, I'm a week late myself in noticing this reminder.\n>\n> > Does everyone agree with Amit's\n> > proposal to use this patch as a workaround?\n>\n> I'm attaching an updated version of the patches with a commit message\n> this time. Two files like before -- one that applies to both v12 and\n> v13 and another to the master branch.\n>\n> > Also, is there a CF entry for this issue? If so I am unable to find it.\n>\n> Unfortunately, I never created one.\n\nPatches as described above are attached again.\n\nForgot to ask in the last reply: for HEAD, should we consider simply\npreventing accessing system columns when querying through a parent\npartitioned table, as was also mentioned upthread?\n\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 12 Mar 2021 21:47:18 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> Forgot to ask in the last reply: for HEAD, should we consider simply\n> preventing accessing system columns when querying through a parent\n> partitioned table, as was also mentioned upthread?\n\nIndeed, it seems like this thread is putting a fair amount of work\ninto a goal that we shouldn't even be trying to do. We gave up the\nassumption that a partitioned table's children would have consistent\nsystem columns the moment we decided to allow foreign tables as\npartition leaf tables; and it's only going to get worse as alternate\ntable AMs become more of a thing. So now I'm thinking the right thing\nto be working on is banning access to system columns via partition\nparent tables (other than tableoid, which ought to work).\n\nI'm not quite sure whether the right way to do that is just not\nmake pg_attribute entries for system columns of partitioned tables,\nas we do for views; or make them but have a code-driven error if\nyou try to use one in a query. The second way is uglier, but\nit should allow a more on-point error message. OTOH, if we start\nto have different sets of system columns for different table AMs,\nwe can't have partitioned tables covering all of those sets.\n\nIn the meantime, if the back branches fail with something like\n\"virtual tuple table slot does not have system attributes\" when\ntrying to do this, that's not great but I'm not sure we should\nbe putting effort into improving it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 21 Apr 2021 15:10:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-21 15:10:09 -0400, Tom Lane wrote:\n> Indeed, it seems like this thread is putting a fair amount of work\n> into a goal that we shouldn't even be trying to do. We gave up the\n> assumption that a partitioned table's children would have consistent\n> system columns the moment we decided to allow foreign tables as\n> partition leaf tables; and it's only going to get worse as alternate\n> table AMs become more of a thing.\n\nAgreed.\n\n\n> So now I'm thinking the right thing to be working on is banning access\n> to system columns via partition parent tables (other than tableoid,\n> which ought to work).\n\nAnd ctid, I assume? While I have some hope for generalizing the\nrepresentation of tids at some point, I don't think it's realistic that\nwe'd actually get rid of them anytime soon.\n\n\n> I'm not quite sure whether the right way to do that is just not\n> make pg_attribute entries for system columns of partitioned tables,\n> as we do for views; or make them but have a code-driven error if\n> you try to use one in a query. The second way is uglier, but\n> it should allow a more on-point error message.\n\nI'm leaning towards the approach of not having the pg_attribute entries\n- it seems not great to have pg_attribute entries for columns that\nactually cannot be queried. I don't think we can expect tooling querying\nthe catalogs to understand that.\n\n\n> OTOH, if we start to have different sets of system columns for\n> different table AMs, we can't have partitioned tables covering all of\n> those sets.\n\nOne could even imagine partition root specific system columns...\n\nIf we wanted AMs to actually be able to do introduce their own set of\nsystem columns we'd need to change their representation to some\ndegree.\n\nISTM that the minimal changes needed would be to reorder sysattr.h to\nhave TableOidAttributeNumber be -2 (and keep\nSelfItemPointerAttributeNumber as -1). 
And then slowly change all code\nto not reference FirstLowInvalidHeapAttributeNumber - which seems like a\n*substantial* amount of effort, due to all the shifting of AttrNumber by\nFirstLowInvalidHeapAttributeNumber to be able to represent system\ncolumns in bitmaps.\n\nBut perhaps that's the wrong direction, and we instead should work\ntowards *removing* system columns besides tableoid and ctid? At least\nthe way they work in heapam doesn't really make xmin,xmax,cmin,cmax\nparticularly useful. Wild speculation: Perhaps we ought to have some\nparse-analysis-time handling for AM specific functions that return\nadditional information about a row, and that are evaluated directly\nabove (or even in) tableams?\n\n\n> In the meantime, if the back branches fail with something like\n> \"virtual tuple table slot does not have system attributes\" when\n> trying to do this, that's not great but I'm not sure we should\n> be putting effort into improving it.\n\nSeems ok enough.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 21 Apr 2021 12:59:04 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-04-21 15:10:09 -0400, Tom Lane wrote:\n>> So now I'm thinking the right thing to be working on is banning access\n>> to system columns via partition parent tables (other than tableoid,\n>> which ought to work).\n\n> And ctid, I assume? While I have some hope for generalizing the\n> representation of tids at some point, I don't think it's realistic that\n> we'd actually get rid of them anytime soon.\n\nHmm, I'd have thought that ctid would be very high on the list of\nthings we don't want to assume are the same for all AMs. Admittedly,\nrefactoring that is going to be a large pain in the rear, but ...\n\nI see that it actually works right now because slot_getsysattr\nspecial-cases both tableoid and ctid, but I have a hard time\nbelieving that that's a long-term answer.\n\n> One could even imagine partition root specific system columns...\n\nYeah. As I think about this, I recall a previous discussion where\nwe suggested that maybe partitioned tables could have a subset\nof system columns, whereupon all their children would be required\nto support (at least) those system columns. But that would have\nto be user-controllable, so a lot of infrastructure would need to\nbe built for it. For the moment I'd be satisfied with a fixed\ndecision that only tableoid is treated that way.\n\n> ISTM that the minimal changes needed would be to reorder sysattr.h to\n> have TableOidAttributeNumber be -2 (and keep\n> SelfItemPointerAttributeNumber as -1). And then slowly change all code\n> to not reference FirstLowInvalidHeapAttributeNumber - which seems like a\n> *substantial* amount of effort, due to all the shifting of AttrNumber by\n> FirstLowInvalidHeapAttributeNumber to be able to represent system\n> columns in bitmaps.\n\nGetting rid of FirstLowInvalidHeapAttributeNumber seems like nearly\na nonstarter. 
I'd be more inclined to reduce it by one or two so\nas to leave some daylight for AMs that want system columns different\nfrom the usual set. I don't feel any urgency about renumbering the\nexisting columns --- the value of that vs. the risk of breaking\nthings doesn't seem like a great tradeoff.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 21 Apr 2021 16:51:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-21 16:51:42 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2021-04-21 15:10:09 -0400, Tom Lane wrote:\n> >> So now I'm thinking the right thing to be working on is banning access\n> >> to system columns via partition parent tables (other than tableoid,\n> >> which ought to work).\n> \n> > And ctid, I assume? While I have some hope for generalizing the\n> > representation of tids at some point, I don't think it's realistic that\n> > we'd actually get rid of them anytime soon.\n> \n> Hmm, I'd have thought that ctid would be very high on the list of\n> things we don't want to assume are the same for all AMs. Admittedly,\n> refactoring that is going to be a large pain in the rear, but ...\n\nI don't really see us getting rid of something like ctid as a generic\nconcept across AMs - there's just too many places that need a way to\nreference a specific tuple. However, I think we ought to change how much\ncode outside of AMs know about what tids mean. And, although that's a\nsignificant lift on its own, we ought to make at least the generic\nrepresentation variable width.\n\n\n> I see that it actually works right now because slot_getsysattr\n> special-cases both tableoid and ctid, but I have a hard time\n> believing that that's a long-term answer.\n\nNot fully responsive to this: I've previously wondered if we ought to\nhandle tableoid at parse-analysis or expression simplification time\ninstead of runtime. It's not really something that needs runtime\nevaluation. But it's also not clear it's worth changing anything...\n\n\n> > One could even imagine partition root specific system columns...\n> \n> Yeah. As I think about this, I recall a previous discussion where\n> we suggested that maybe partitioned tables could have a subset\n> of system columns, whereupon all their children would be required\n> to support (at least) those system columns. 
But that would have\n> to be user-controllable, so a lot of infrastructure would need to\n> be built for it. For the moment I'd be satisfied with a fixed\n> decision that only tableoid is treated that way.\n\nI don't really see a convincing usecase to add this kind of complication\nwith the current set of columns. Outside of tableoid and ctid every use\nof system attributes was either wrong, or purely for debugging /\nintrospection, where restricting it to actual tables doesn't seem like a\nproblem.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 21 Apr 2021 14:12:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-04-21 16:51:42 -0400, Tom Lane wrote:\n>> Hmm, I'd have thought that ctid would be very high on the list of\n>> things we don't want to assume are the same for all AMs. Admittedly,\n>> refactoring that is going to be a large pain in the rear, but ...\n\n> I don't really see us getting rid of something like ctid as a generic\n> concept across AMs - there's just too many places that need a way to\n> reference a specific tuple. However, I think we ought to change how much\n> code outside of AMs know about what tids mean. And, although that's a\n> significant lift on its own, we ought to make at least the generic\n> representation variable width.\n\nIt seems like it might not be that hard to convert ctid generically\ninto a uint64, where heaps and heap-related indexes only use 6 bytes\nof it. Variable-width I agree would be a very big complication added\non top, and I'm not quite convinced that we need it.\n\n> Not fully responsive to this: I've previously wondered if we ought to\n> handle tableoid at parse-analysis or expression simplification time\n> instead of runtime. It's not really something that needs runtime\n> evaluation. But it's also not clear it's worth changing anything...\n\nIf you do \"RETURNING tableoid\" in DML on a partitioned table, or on a\ntraditional-inheritance hierarchy, you get the OID of the particular\npartition that was touched (indeed, that's the whole point of the\nfeature, IIRC). Don't really see how that could be implemented\nearlier than runtime. Also don't see where the win would be, TBH.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 21 Apr 2021 17:38:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-21 17:38:53 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I don't really see us getting rid of something like ctid as a generic\n> > concept across AMs - there's just too many places that need a way to\n> > reference a specific tuple. However, I think we ought to change how much\n> > code outside of AMs know about what tids mean. And, although that's a\n> > significant lift on its own, we ought to make at least the generic\n> > representation variable width.\n>\n> It seems like it might not be that hard to convert ctid generically\n> into a uint64, where heaps and heap-related indexes only use 6 bytes\n> of it.\n\nYep.\n\n\n> Variable-width I agree would be a very big complication added on top,\n> and I'm not quite convinced that we need it.\n\nI can see three (related) major cases where variable width tids would be\nquite useful:\n1) Creating an index-oriented-table AM would harder/more\n limited with just an 8 byte tid\n2) Supporting \"indirect\" indexes (i.e. indexes pointing to a primary\n key, thereby being much cheaper to maintain when there are updates),\n would require the primary key to map to an 8 byte integer.\n3) Global indexes would be a lot easier if we had variable width tids\n (but other ways of addressing the issue are possible).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 21 Apr 2021 14:48:18 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Hi guys, Are you going to completely remove xmin from partitioned tables? So it will not work in select ..., xmin from partitioned_table ? For us as users of Postgres (not knowing internal details) it is strange that there are some problems with getting current transaction and saving it as usual to xmin... What about first adding a \"rowversion\" type like in SQL Server for optimistic concurrency? I would remind that many code use xmin and would no be able to work with partitioned table...https://www.npgsql.org/efcore/modeling/concurrency.html -- С уважением, Павел 22.04.2021, 00:48, \"Andres Freund\" <andres@anarazel.de>:Hi,On 2021-04-21 17:38:53 -0400, Tom Lane wrote: Andres Freund <andres@anarazel.de> writes: > I don't really see us getting rid of something like ctid as a generic > concept across AMs - there's just too many places that need a way to > reference a specific tuple. However, I think we ought to change how much > code outside of AMs know about what tids mean. And, although that's a > significant lift on its own, we ought to make at least the generic > representation variable width. It seems like it might not be that hard to convert ctid generically into a uint64, where heaps and heap-related indexes only use 6 bytes of it.Yep. Variable-width I agree would be a very big complication added on top, and I'm not quite convinced that we need it.I can see three (related) major cases where variable width tids would bequite useful:1) Creating an index-oriented-table AM would harder/more limited with just an 8 byte tid2) Supporting \"indirect\" indexes (i.e. indexes pointing to a primary key, thereby being much cheaper to maintain when there are updates), would require the primary key to map to an 8 byte integer.3) Global indexes would be a lot easier if we had variable width tids (but other ways of addressing the issue are possible).Greetings,Andres Freund",
"msg_date": "Thu, 22 Apr 2021 07:30:41 +0300",
"msg_from": "Pavel Biryukov <79166341370@yandex.ru>",
"msg_from_op": true,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "On Thu, Apr 22, 2021 at 4:10 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > Forgot to ask in the last reply: for HEAD, should we consider simply\n> > preventing accessing system columns when querying through a parent\n> > partitioned table, as was also mentioned upthread?\n>\n> Indeed, it seems like this thread is putting a fair amount of work\n> into a goal that we shouldn't even be trying to do. We gave up the\n> assumption that a partitioned table's children would have consistent\n> system columns the moment we decided to allow foreign tables as\n> partition leaf tables; and it's only going to get worse as alternate\n> table AMs become more of a thing. So now I'm thinking the right thing\n> to be working on is banning access to system columns via partition\n> parent tables (other than tableoid, which ought to work).\n\nAccessing both tableoid and ctid works currently. Based on the\ndiscussion, it seems we're not so sure whether that will remain the\ncase for the 2nd going forward.\n\n> I'm not quite sure whether the right way to do that is just not\n> make pg_attribute entries for system columns of partitioned tables,\n> as we do for views; or make them but have a code-driven error if\n> you try to use one in a query. The second way is uglier, but\n> it should allow a more on-point error message. OTOH, if we start\n> to have different sets of system columns for different table AMs,\n> we can't have partitioned tables covering all of those sets.\n\nI tend to agree with Andres that not having any pg_attribute entries\nmay be better.\n\n> In the meantime, if the back branches fail with something like\n> \"virtual tuple table slot does not have system attributes\" when\n> trying to do this, that's not great but I'm not sure we should\n> be putting effort into improving it.\n\nGot it. That sounds like an acceptable compromise.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 22 Apr 2021 21:17:08 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Thu, Apr 22, 2021 at 4:10 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> In the meantime, if the back branches fail with something like\n>> \"virtual tuple table slot does not have system attributes\" when\n>> trying to do this, that's not great but I'm not sure we should\n>> be putting effort into improving it.\n\n> Got it. That sounds like an acceptable compromise.\n\nHm, we're not done with this. Whatever you think of the merits of\nthrowing an implementation-level error, what's actually happening\nin v12 and v13 in the wake of a71cfc56b is that an attempt to do\n\"RETURNING xmin\" works in a non-cross-partition UPDATE, but in\na cross-partition UPDATE it dumps core. What I'm seeing is that\nthe result of the execute_attr_map_slot() call looks like\n\n(gdb) p *(BufferHeapTupleTableSlot*) scanslot\n$3 = {base = {base = {type = T_TupleTableSlot, tts_flags = 8, tts_nvalid = 3, \n tts_ops = 0xa305c0 <TTSOpsBufferHeapTuple>, \n tts_tupleDescriptor = 0x7f1c7798f920, tts_values = 0x2077100, \n tts_isnull = 0x2075fb0, tts_mcxt = 0x2015ea0, tts_tid = {ip_blkid = {\n bi_hi = 0, bi_lo = 0}, ip_posid = 3}, tts_tableOid = 37812}, \n tuple = 0x0, off = 0, tupdata = {t_len = 0, t_self = {ip_blkid = {\n bi_hi = 0, bi_lo = 0}, ip_posid = 0}, t_tableOid = 0, \n t_data = 0x0}}, buffer = 0}\n\nand since base.tuple is NULL, heap_getsysattr dies on either\n\"Assert(tup)\" or a null-pointer dereference.\n\nSo\n\n(1) It seems like this is exposing a shortcoming in the\nmultiple-slot-types logic. It's not hard to understand why the slot would\nlook like this after execute_attr_map_slot does ExecStoreVirtualTuple,\nbut if this is not a legal state for a BufferHeapTuple slot, why didn't\nExecStoreVirtualTuple raise a complaint?\n\n(2) It also seems like we can't use the srcSlot if we want to have\nthe fail-because-its-a-virtual-tuple behavior. I experimented with\ndoing ExecMaterializeSlot on the result of execute_attr_map_slot,\nand that stops the crash, but then we're returning garbage values\nof xmin etc, which does not seem good.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Apr 2021 14:21:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-22 14:21:05 -0400, Tom Lane wrote:\n> Hm, we're not done with this. Whatever you think of the merits of\n> throwing an implementation-level error, what's actually happening\n> in v12 and v13 in the wake of a71cfc56b is that an attempt to do\n> \"RETURNING xmin\" works in a non-cross-partition UPDATE, but in\n> a cross-partition UPDATE it dumps core. What I'm seeing is that\n> the result of the execute_attr_map_slot() call looks like\n> \n> (gdb) p *(BufferHeapTupleTableSlot*) scanslot\n> $3 = {base = {base = {type = T_TupleTableSlot, tts_flags = 8, tts_nvalid = 3, \n> tts_ops = 0xa305c0 <TTSOpsBufferHeapTuple>, \n> tts_tupleDescriptor = 0x7f1c7798f920, tts_values = 0x2077100, \n> tts_isnull = 0x2075fb0, tts_mcxt = 0x2015ea0, tts_tid = {ip_blkid = {\n> bi_hi = 0, bi_lo = 0}, ip_posid = 3}, tts_tableOid = 37812}, \n> tuple = 0x0, off = 0, tupdata = {t_len = 0, t_self = {ip_blkid = {\n> bi_hi = 0, bi_lo = 0}, ip_posid = 0}, t_tableOid = 0, \n> t_data = 0x0}}, buffer = 0}\n> \n> and since base.tuple is NULL, heap_getsysattr dies on either\n> \"Assert(tup)\" or a null-pointer dereference.\n> \n> So\n> \n> (1) It seems like this is exposing a shortcoming in the\n> multiple-slot-types logic. It's not hard to understand why the slot would\n> look like this after execute_attr_map_slot does ExecStoreVirtualTuple,\n> but if this is not a legal state for a BufferHeapTuple slot, why didn't\n> ExecStoreVirtualTuple raise a complaint?\n\nI think it's too useful to support ExecStoreVirtualTuple() in a\nheap[buffer] slot to disallow that. Seems like we should make\ntts_buffer_heap_getsysattr() error out if there's no tuple?\n\n\n> (2) It also seems like we can't use the srcSlot if we want to have\n> the fail-because-its-a-virtual-tuple behavior. I experimented with\n> doing ExecMaterializeSlot on the result of execute_attr_map_slot,\n> and that stops the crash, but then we're returning garbage values\n> of xmin etc, which does not seem good.\n\nGarbage values as in 0's, or random data? Seems like it should be the\nformer?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 22 Apr 2021 11:31:48 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-04-22 14:21:05 -0400, Tom Lane wrote:\n>> (1) It seems like this is exposing a shortcoming in the\n>> multiple-slot-types logic. It's not hard to understand why the slot would\n>> look like this after execute_attr_map_slot does ExecStoreVirtualTuple,\n>> but if this is not a legal state for a BufferHeapTuple slot, why didn't\n>> ExecStoreVirtualTuple raise a complaint?\n\n> I think it's too useful to support ExecStoreVirtualTuple() in a\n> heap[buffer] slot to disallow that. Seems like we should make\n> tts_buffer_heap_getsysattr() error out if there's no tuple?\n\nOK, I could work with that. Shall we spell the error message the\nsame as if it really were a virtual slot, or does it need to be\ndifferent to avoid confusion?\n\n>> (2) It also seems like we can't use the srcSlot if we want to have\n>> the fail-because-its-a-virtual-tuple behavior. I experimented with\n>> doing ExecMaterializeSlot on the result of execute_attr_map_slot,\n>> and that stops the crash, but then we're returning garbage values\n>> of xmin etc, which does not seem good.\n\n> Garbage values as in 0's, or random data? Seems like it should be the\n> former?\n\nI was seeing something like xmin = 128. I think this might be from\nour filling in the header as though for a composite Datum. In any\ncase, I think we need to either deliver the correct answer or throw\nan error; silently returning zeroes wouldn't be good.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Apr 2021 14:37:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-22 14:37:26 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2021-04-22 14:21:05 -0400, Tom Lane wrote:\n> >> (1) It seems like this is exposing a shortcoming in the\n> >> multiple-slot-types logic. It's not hard to understand why the slot would\n> >> look like this after execute_attr_map_slot does ExecStoreVirtualTuple,\n> >> but if this is not a legal state for a BufferHeapTuple slot, why didn't\n> >> ExecStoreVirtualTuple raise a complaint?\n> \n> > I think it's too useful to support ExecStoreVirtualTuple() in a\n> > heap[buffer] slot to disallow that. Seems like we should make\n> > tts_buffer_heap_getsysattr() error out if there's no tuple?\n> \n> OK, I could work with that. Shall we spell the error message the\n> same as if it really were a virtual slot, or does it need to be\n> different to avoid confusion?\n\nHm. Seems like it'd be better to have a distinct error message? Feels\nlike it'll often indicate separate issues whether a non-materialized\nheap slot or a virtual slot has its system columns accessed.\n\n\n> >> (2) It also seems like we can't use the srcSlot if we want to have\n> >> the fail-because-its-a-virtual-tuple behavior. I experimented with\n> >> doing ExecMaterializeSlot on the result of execute_attr_map_slot,\n> >> and that stops the crash, but then we're returning garbage values\n> >> of xmin etc, which does not seem good.\n> \n> > Garbage values as in 0's, or random data? Seems like it should be the\n> > former?\n> \n> I was seeing something like xmin = 128. I think this might be from\n> our filling in the header as though for a composite Datum.\n\nI was wondering if we're not initializing memory somewhere were we\nshould...\n\n\n> In any case, I think we need to either deliver the correct answer or\n> throw an error; silently returning zeroes wouldn't be good.\n\nIIRC there's some historical precedent in returning 0s in some\ncases. But I don't think we should do that if we can avoid it.\n\nNot entirely clear to me how we'd track whether we have valid system\ncolumn data or not once materialized - which I think is why we\nhistorically had the cases where we returned bogus values.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 22 Apr 2021 11:57:24 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-04-22 14:37:26 -0400, Tom Lane wrote:\n>> OK, I could work with that. Shall we spell the error message the\n>> same as if it really were a virtual slot, or does it need to be\n>> different to avoid confusion?\n\n> Hm. Seems like it'd be better to have a distinct error message? Feels\n> like it'll often indicate separate issues whether a non-materialized\n> heap slot or a virtual slot has its system columns accessed.\n\nAfter thinking about it for a bit, I'm inclined to promote this to\na user-facing error, and have all the slot types report\n\n\tereport(ERROR,\n\t (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n\t errmsg(\"cannot retrieve a system column in this context\")));\n\nwhich is at least somewhat intelligible to end users. A developer\ntrying to figure out why it happened would resort to \\errverbose or\nmore likely gdb in any case, so the lack of specificity doesn't\nseem like a problem.\n\n> Not entirely clear to me how we'd track whether we have valid system\n> column data or not once materialized - which I think is why we\n> historically had the cases where we returned bogus values.\n\nRight, but with this fix we won't need to materialize.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Apr 2021 15:09:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-22 15:09:59 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2021-04-22 14:37:26 -0400, Tom Lane wrote:\n> >> OK, I could work with that. Shall we spell the error message the\n> >> same as if it really were a virtual slot, or does it need to be\n> >> different to avoid confusion?\n> \n> > Hm. Seems like it'd be better to have a distinct error message? Feels\n> > like it'll often indicate separate issues whether a non-materialized\n> > heap slot or a virtual slot has its system columns accessed.\n> \n> After thinking about it for a bit, I'm inclined to promote this to\n> a user-facing error, and have all the slot types report\n> \n> \tereport(ERROR,\n> \t (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> \t errmsg(\"cannot retrieve a system column in this context\")));\n\nWFM.\n\n\n> which is at least somewhat intelligible to end users. A developer\n> trying to figure out why it happened would resort to \\errverbose or\n> more likely gdb in any case, so the lack of specificity doesn't\n> seem like a problem.\n\nI'm wondering if it'd be a good idea to add TupleTableSlotOps.name,\nwhich we then could include in error messages without needing to end up\nwith per-slot-type code... But that's probably a separate project from\nadding the error above.\n\n\n> > Not entirely clear to me how we'd track whether we have valid system\n> > column data or not once materialized - which I think is why we\n> > historically had the cases where we returned bogus values.\n> \n> Right, but with this fix we won't need to materialize.\n\nIn this path - but it does seem mildly bothersome that we might do the\nwrong thing in other paths. If we at least returned something halfway\nsensible (e.g. NULL) instead of 128... I guess we could track whether\nthe tuple originated externally, or whether it's from a materialized\nvirtual tuple...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 22 Apr 2021 12:54:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-04-22 15:09:59 -0400, Tom Lane wrote:\n>> After thinking about it for a bit, I'm inclined to promote this to\n>> a user-facing error, and have all the slot types report\n>> \tereport(ERROR,\n>> \t (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n>> \t errmsg(\"cannot retrieve a system column in this context\")));\n\n> WFM.\n\nOK, I'll go make that happen.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Apr 2021 16:28:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
},
{
"msg_contents": "On Fri, Apr 23, 2021 at 3:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Thu, Apr 22, 2021 at 4:10 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> In the meantime, if the back branches fail with something like\n> >> \"virtual tuple table slot does not have system attributes\" when\n> >> trying to do this, that's not great but I'm not sure we should\n> >> be putting effort into improving it.\n>\n> > Got it. That sounds like an acceptable compromise.\n>\n> Hm, we're not done with this. Whatever you think of the merits of\n> throwing an implementation-level error, what's actually happening\n> in v12 and v13 in the wake of a71cfc56b is that an attempt to do\n> \"RETURNING xmin\" works in a non-cross-partition UPDATE, but in\n> a cross-partition UPDATE it dumps core.\n\nThanks for fixing this.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 23 Apr 2021 10:05:46 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: posgres 12 bug (partitioned table)"
}
] |
[
{
"msg_contents": "To let users know what kind of character set\ncan be used add examples and a link to --encoding option.\n\nThanks,\nDong wook",
"msg_date": "Thu, 4 Jun 2020 22:20:01 +0900",
"msg_from": "=?UTF-8?B?7J2064+Z7Jqx?= <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] pg_dump: Add example and link for --encoding option"
},
{
"msg_contents": "I've modified my previous patch because it linked the wrong document so I\nfixed it. and I add a highlight to the encoding list for readability.\n\nWhat do you think about this change?\n\nThanks,\nDong Wook\n\n2020년 6월 4일 (목) 오후 10:20, 이동욱 <sh95119@gmail.com>님이 작성:\n\n> To let users know what kind of character set\n> can be used add examples and a link to --encoding option.\n>\n> Thanks,\n> Dong wook\n>",
"msg_date": "Fri, 5 Jun 2020 21:45:52 +0900",
"msg_from": "=?UTF-8?B?7J2064+Z7Jqx?= <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] pg_dump: Add example and link for --encoding option"
},
{
"msg_contents": "2020년 6월 5일 (금) 오후 9:45, 이동욱 <sh95119@gmail.com>님이 작성:\n\n> I've modified my previous patch because it linked the wrong document so I\n> fixed it. and I add a highlight to the encoding list for readability.\n>\n> What do you think about this change?\n>\n> Thanks,\n> Dong Wook\n>\n> 2020년 6월 4일 (목) 오후 10:20, 이동욱 <sh95119@gmail.com>님이 작성:\n>\n>> To let users know what kind of character set\n>> can be used add examples and a link to --encoding option.\n>>\n>> Thanks,\n>> Dong wook\n>>\n>",
"msg_date": "Fri, 12 Jun 2020 17:55:35 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": false,
"msg_subject": "Fwd: [PATCH] pg_dump: Add example and link for --encoding option"
},
{
"msg_contents": "On 2020-06-05 14:45, 이동욱 wrote:\n> I've modified my previous patch because it linked the wrong document so \n> I fixed it. and I add a highlight to the encoding list for readability.\n> \n> What do you think about this change?\n\nThe wording in your patch needed a bit of editing but adding a link to \nthe supported encodings seems sensible. I have committed it with a new \nwording.\n\n> \n> Thanks,\n> Dong Wook\n> \n> 2020년 6월 4일 (목) 오후 10:20, 이동욱 <sh95119@gmail.com \n> <mailto:sh95119@gmail.com>>님이 작성:\n> \n> To let users know what kind of character set\n> can be used add examples and a link to --encoding option.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 11 Jul 2020 13:55:19 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_dump: Add example and link for --encoding option"
}
] |
[
{
"msg_contents": "Hi,\n\nAfter conferring, the PostgreSQL 13 RMT[1] has decided that it is time\nto create the REL_13_STABLE branch. Tom has volunteered to create the\nbranch this Sunday (2020-06-07).\n\nPlease let us know if you have any questions.\n\nThanks,\n\nAlvaro, Peter, Jonathan\n\n[1] https://wiki.postgresql.org/wiki/Release_Management_Team",
"msg_date": "Thu, 4 Jun 2020 14:47:54 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "REL_13_STABLE Branch"
}
] |
[
{
"msg_contents": "This is a patch to make use of RELKIND_HAS_STORAGE() where appropriate, \ninstead of listing out the relkinds individually. No behavior change is \nintended.\n\nThis was originally part of the patch from [0], but it seems worth \nmoving forward independently.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/dc35a398-37d0-75ce-07ea-1dd71d98f8ec%402ndquadrant.com\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 5 Jun 2020 11:05:02 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Make more use of RELKIND_HAS_STORAGE()"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> This is a patch to make use of RELKIND_HAS_STORAGE() where appropriate, \n> instead of listing out the relkinds individually. No behavior change is \n> intended.\n> This was originally part of the patch from [0], but it seems worth \n> moving forward independently.\n\nPasses eyeball examination. I did not try to search for other places\nwhere RELKIND_HAS_STORAGE should be used.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Jun 2020 12:05:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Make more use of RELKIND_HAS_STORAGE()"
},
{
"msg_contents": "On 2020-06-05 18:05, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> This is a patch to make use of RELKIND_HAS_STORAGE() where appropriate,\n>> instead of listing out the relkinds individually. No behavior change is\n>> intended.\n>> This was originally part of the patch from [0], but it seems worth\n>> moving forward independently.\n> \n> Passes eyeball examination. I did not try to search for other places\n> where RELKIND_HAS_STORAGE should be used.\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 12 Jun 2020 09:16:04 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Make more use of RELKIND_HAS_STORAGE()"
},
{
"msg_contents": "On Fri, Jun 12, 2020 at 09:16:04AM +0200, Peter Eisentraut wrote:\n> committed\n\nYeah!\n--\nMichael",
"msg_date": "Fri, 12 Jun 2020 16:22:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Make more use of RELKIND_HAS_STORAGE()"
}
] |
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 16481\nLogged by: Fabio Vianello\nEmail address: fabio.vianello@salvagninigroup.com\nPostgreSQL version: 12.3\nOperating system: Windows 10\nDescription: \n\nAbout the bug BUG #15293, on PostgreSQL version 10.4 and 11.2 as describe\nbelow, we found the same issue on the PostgreSQL version 12.3.\r\n\r\nDo you think to solve the issue?\r\n\r\nIs it a feature? \r\nBecasue in the documentation we didn't found any constraint that says that\nwe can not use NOTIFY/LISTEN on logical replication tables.\r\n\r\n\"When using logical replication a stored procedure executed on the replica\nis\r\nunable to use NOTIFY to send messages to other listeners. The stored\r\nprocedure does execute as expected but the pg_notify() doesn't appear to\r\nhave any effect. If an insert is run on the replica side the trigger\r\nexecutes the stored procedure as expected and the NOTIFY correctly\nnotifies\r\nlisteners.\r\n\r\nSteps to Reproduce:\r\nSet up Master:\r\nCREATE TABLE test (id SERIAL PRIMARY KEY, msg TEXT NOT NULL);\r\nCREATE PUBLICATION testpub FOR TABLE test;\r\n\r\nSet up Replica:\r\nCREATE TABLE test (id SERIAL PRIMARY KEY, msg TEXT NOT NULL);\r\nCREATE SUBSCRIPTION testsub CONNECTION 'host=192.168.0.136 user=test\r\npassword=test' PUBLICATION testpub;\r\nCREATE OR REPLACE FUNCTION notify_channel() RETURNS trigger AS $$\r\nBEGIN\r\nRAISE LOG 'Notify Triggered';\r\nPERFORM pg_notify('testchannel', 'Testing');\r\nRETURN NEW;\r\nEND;\r\n$$ LANGUAGE 'plpgsql';\r\nDROP TRIGGER queue_insert ON TEST;\r\nCREATE TRIGGER queue_insert AFTER INSERT ON test FOR EACH ROW EXECUTE\r\nPROCEDURE notify_channel();\r\nALTER TABLE test ENABLE ALWAYS TRIGGER queue_insert;\r\nLISTEN testchannel;\r\n\r\nRun the following insert on the master:\r\nINSERT INTO test (msg) VALUES ('test');\r\n\r\nIn postgresql-10-main.log I get the following:\r\n2018-07-24 07:45:15.705 EDT [6701] LOG: 00000: Notify Triggered\r\n2018-07-24 07:45:15.705 EDT [6701] CONTEXT: PL/pgSQL function\r\nnotify_channel() line 3 at RAISE\r\n2018-07-24 07:45:15.705 EDT [6701] LOCATION: exec_stmt_raise,\r\npl_exec.c:3337\r\n\r\nBut no listeners receive the message. However if an insert is run directly\r\non the replica:\r\nINSERT INTO test VALUES (99999, 'test');\r\n\r\nINSERT 0 1\r\nAsynchronous notification \"testchannel\" with payload \"Testing\" received\nfrom\r\nserver process with PID 6701.\r\nAsynchronous notification \"testchannel\" with payload \"Testing\" received\nfrom\r\nserver process with PID 6701.\r\nAsynchronous notification \"testchannel\" with payload \"Testing\" received\nfrom\r\nserver process with PID 6701.\r\nAsynchronous notification \"testchannel\" with payload \"Testing\" received\nfrom\r\nserver process with PID 6701.\r\nAsynchronous notification \"testchannel\" with payload \"Testing\" received\nfrom\r\nserver process with PID 6701.\r\nAsynchronous notification \"testchannel\" with payload \"Testing\" received\nfrom\r\nserver process with PID 9992.\r\n\r\nBacked up notifications are received for previous NOTIFY's.\"",
"msg_date": "Fri, 05 Jun 2020 11:05:14 +0000",
"msg_from": "PG Bug reporting form <noreply@postgresql.org>",
"msg_from_op": true,
"msg_subject": "BUG #16481: Stored Procedure Triggered by Logical Replication is\n Unable to use Notification Events"
},
{
"msg_contents": "Hello.\n\nIt seems to me a bug.\n\nAt Fri, 05 Jun 2020 11:05:14 +0000, PG Bug reporting form <noreply@postgresql.org> wrote in \n> The following bug has been logged on the website:\n> \n> Bug reference: 16481\n> Logged by: Fabio Vianello\n> Email address: fabio.vianello@salvagninigroup.com\n> PostgreSQL version: 12.3\n> Operating system: Windows 10\n> Description: \n> \n> About the bug BUG #15293, on PostgreSQL version 10.4 and 11.2 as describe\n> below, we found the same issue on the PostgreSQL version 12.3.\n\nThe HEAD behaves the same way.\n\n> Is it a feature? \n> Becasue in the documentation we didn't found any constraint that says that\n> we can not use NOTIFY/LISTEN on logical replication tables.\n> \n> \"When using logical replication a stored procedure executed on the replica\n> is\n> unable to use NOTIFY to send messages to other listeners. The stored\n> procedure does execute as expected but the pg_notify() doesn't appear to\n> have any effect. If an insert is run on the replica side the trigger\n> executes the stored procedure as expected and the NOTIFY correctly\n> notifies\n> listeners.\n\nThe message is actually queued, but logical replication worker doesn't\nsignal that to listener backends. If any ordinary session sent a\nmessage to the same listener after that, both messages would be shown\nat once.\n\nThat can be fixed by calling ProcessCompletedNotifies() in\napply_handle_commit. The function has a code to write out\nnotifications to connected clients but it doesn't nothing on logical\nreplication workers.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 08 Jun 2020 17:27:30 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16481: Stored Procedure Triggered by Logical Replication\n is Unable to use Notification Events"
},
{
"msg_contents": "Hi Kyotaro Horiguchi, thanks for you helps.\r\nWe have a question about the bug. Why there isn't any solution in the HEAD?\r\n\r\nThis bug last since 10.4 version and I can't understand why it is not fixed in the HEAD yet.\r\n\r\nBR.\r\nFabio Vianello.\r\n\r\n\r\n-----Original Message-----\r\nFrom: Kyotaro Horiguchi [mailto:horikyota.ntt@gmail.com]\r\nSent: lunedì 8 giugno 2020 10:28\r\nTo: Vianello Fabio <fabio.vianello@salvagninigroup.com>; pgsql-bugs@lists.postgresql.org; pgsql-hackers@lists.postgresql.org\r\nSubject: Re: BUG #16481: Stored Procedure Triggered by Logical Replication is Unable to use Notification Events\r\n\r\nHello.\r\n\r\nIt seems to me a bug.\r\n\r\nAt Fri, 05 Jun 2020 11:05:14 +0000, PG Bug reporting form <noreply@postgresql.org> wrote in\r\n> The following bug has been logged on the website:\r\n>\r\n> Bug reference: 16481\r\n> Logged by: Fabio Vianello\r\n> Email address: fabio.vianello@salvagninigroup.com\r\n> PostgreSQL version: 12.3\r\n> Operating system: Windows 10\r\n> Description:\r\n>\r\n> About the bug BUG #15293, on PostgreSQL version 10.4 and 11.2 as\r\n> describe below, we found the same issue on the PostgreSQL version 12.3.\r\n\r\nThe HEAD behaves the same way.\r\n\r\n> Is it a feature?\r\n> Becasue in the documentation we didn't found any constraint that says\r\n> that we can not use NOTIFY/LISTEN on logical replication tables.\r\n>\r\n> \"When using logical replication a stored procedure executed on the\r\n> replica is unable to use NOTIFY to send messages to other listeners.\r\n> The stored procedure does execute as expected but the pg_notify()\r\n> doesn't appear to have any effect. If an insert is run on the replica\r\n> side the trigger executes the stored procedure as expected and the\r\n> NOTIFY correctly notifies listeners.\r\n\r\nThe message is actually queued, but logical replication worker doesn't signal that to listener backends. If any ordinary session sent a message to the same listener after that, both messages would be shown at once.\r\n\r\nThat can be fixed by calling ProcessCompletedNotifies() in apply_handle_commit. The function has a code to write out notifications to connected clients but it doesn't nothing on logical replication workers.\r\n\r\nregards.\r\n\r\n--\r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\nSALVAGNINI ITALIA S.p.A.\r\nVia Guido Salvagnini, 51 - IT - 36040 Sarego (VI)\r\nT. +39 0444 725111 | F. +39 0444 43 6404\r\nSocietà a socio unico - Attività direz. e coord.: Salvagnini Holding S.p.A.\r\nClicca qui<https://www.salvagninigroup.com/company-information> per le informazioni societarie\r\nsalvagninigroup.com<https://www.salvagninigroup.com> | salvagnini.it<http://www.salvagnini.it>\r\n\r\n\r\nLe informazioni trasmesse sono destinate esclusivamente alla persona o alla società in indirizzo e sono da intendersi confidenziali e riservate. Ogni trasmissione, inoltro, diffusione o altro uso di queste informazioni a persone o società differenti dal destinatario è proibita. Se avete ricevuto questa comunicazione per errore, per favore contattate il mittente e cancellate le informazioni da ogni computer. Questa casella di posta elettronica è riservata esclusivamente all’invio ed alla ricezione di messaggi aziendali inerenti all’attività lavorativa e non è previsto né autorizzato l’utilizzo per fini personali. Pertanto i messaggi in uscita e quelli di risposta in entrata verranno trattati quali messaggi aziendali e soggetti alla ordinaria gestione disposta con proprio disciplinare dall’azienda e, di conseguenza, eventualmente anche alla lettura da parte di persone diverse dall’intestatario della casella.\r\n\r\nAny information herein transmitted only concerns the person or the company named in the address and is deemed to be confidential. It is strictly forbidden to transmit, post, forward or otherwise use said information to anyone other than the recipient. If you have received this message by mistake, please contact the sender and delete any relevant information from your computer. This mailbox is only meant for sending and receiving messages pertaining business matters and any other use for personal purposes is forbidden and unauthorized. Therefore, any email sent and received will be handled as ordinary business messages and subject to the company's own rules, and may thus be read also by people other than the user named in the mailbox address.\r\n",
"msg_date": "Mon, 8 Jun 2020 09:13:44 +0000",
"msg_from": "Vianello Fabio <fabio.vianello@salvagninigroup.com>",
"msg_from_op": false,
"msg_subject": "RE: BUG #16481: Stored Procedure Triggered by Logical Replication is\n Unable to use Notification Events"
},
{
"msg_contents": "On Mon, 8 Jun 2020 at 05:27, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n>\n> That can be fixed by calling ProcessCompletedNotifies() in\n> apply_handle_commit. The function has a code to write out\n> notifications to connected clients but it doesn't nothing on logical\n> replication workers.\n>\n>\nThis bug was already reported some time ago (#15293) but it slipped through\nthe\ncracks. I don't think you should simply call ProcessCompletedNotifies [1].\n\n[1] https://www.postgresql.org/message-id/13844.1532468610%40sss.pgh.pa.us\n\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\nOn Mon, 8 Jun 2020 at 05:27, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\nThat can be fixed by calling ProcessCompletedNotifies() in\napply_handle_commit. The function has a code to write out\nnotifications to connected clients but it doesn't nothing on logical\nreplication workers.\nThis bug was already reported some time ago (#15293) but it slipped through the cracks. I don't think you should simply call ProcessCompletedNotifies [1].[1] https://www.postgresql.org/message-id/13844.1532468610%40sss.pgh.pa.us-- Euler Taveira http://www.2ndQuadrant.com/PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 8 Jun 2020 07:51:18 -0300",
"msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16481: Stored Procedure Triggered by Logical Replication is\n Unable to use Notification Events"
},
{
"msg_contents": "Is PostgreSQL a serious product? For me the answer is \"NO\". A product with a bug that last for years and the community knows.\r\nIt is not serious.\r\n\r\nBR,\r\nFabio.\r\n\r\nFrom: Euler Taveira [mailto:euler.taveira@2ndquadrant.com]\r\nSent: lunedì 8 giugno 2020 12:51\r\nTo: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\r\nCc: Vianello Fabio <fabio.vianello@salvagninigroup.com>; pgsql-bugs@lists.postgresql.org; PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>\r\nSubject: Re: BUG #16481: Stored Procedure Triggered by Logical Replication is Unable to use Notification Events\r\n\r\nOn Mon, 8 Jun 2020 at 05:27, Kyotaro Horiguchi <horikyota.ntt@gmail.com<mailto:horikyota.ntt@gmail.com>> wrote:\r\n\r\nThat can be fixed by calling ProcessCompletedNotifies() in\r\napply_handle_commit. The function has a code to write out\r\nnotifications to connected clients but it doesn't nothing on logical\r\nreplication workers.\r\n\r\nThis bug was already reported some time ago (#15293) but it slipped through the\r\ncracks. I don't think you should simply call ProcessCompletedNotifies [1].\r\n\r\n[1] https://www.postgresql.org/message-id/13844.1532468610%40sss.pgh.pa.us\r\n\r\n\r\n--\r\nEuler Taveira http://www.2ndQuadrant.com/\r\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\r\nSALVAGNINI ITALIA S.p.A.\r\nVia Guido Salvagnini, 51 - IT - 36040 Sarego (VI)\r\nT. +39 0444 725111 | F. +39 0444 43 6404\r\nSocietà a socio unico - Attività direz. e coord.: Salvagnini Holding S.p.A.\r\nClicca qui<https://www.salvagninigroup.com/company-information> per le informazioni societarie\r\nsalvagninigroup.com<https://www.salvagninigroup.com> | salvagnini.it<http://www.salvagnini.it>\r\n\r\n\r\nLe informazioni trasmesse sono destinate esclusivamente alla persona o alla società in indirizzo e sono da intendersi confidenziali e riservate. 
Ogni trasmissione, inoltro, diffusione o altro uso di queste informazioni a persone o società differenti dal destinatario è proibita. Se avete ricevuto questa comunicazione per errore, per favore contattate il mittente e cancellate le informazioni da ogni computer. Questa casella di posta elettronica è riservata esclusivamente all’invio ed alla ricezione di messaggi aziendali inerenti all’attività lavorativa e non è previsto né autorizzato l’utilizzo per fini personali. Pertanto i messaggi in uscita e quelli di risposta in entrata verranno trattati quali messaggi aziendali e soggetti alla ordinaria gestione disposta con proprio disciplinare dall’azienda e, di conseguenza, eventualmente anche alla lettura da parte di persone diverse dall’intestatario della casella.\r\n\r\nAny information herein transmitted only concerns the person or the company named in the address and is deemed to be confidential. It is strictly forbidden to transmit, post, forward or otherwise use said information to anyone other than the recipient. If you have received this message by mistake, please contact the sender and delete any relevant information from your computer. This mailbox is only meant for sending and receiving messages pertaining business matters and any other use for personal purposes is forbidden and unauthorized. Therefore, any email sent and received will be handled as ordinary business messages and subject to the company's own rules, and may thus be read also by people other than the user named in the mailbox address.\r\n\n\n\n\n\n\n\n\n\nIs PostgreSQL a serious product? For me the answer is \"NO\". 
A product with a bug that last for years and the community knows.\r\n\nIt is not serious.\n \nBR,\nFabio.\n \nFrom: Euler Taveira [mailto:euler.taveira@2ndquadrant.com]\r\n\nSent: lunedì 8 giugno 2020 12:51\nTo: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nCc: Vianello Fabio <fabio.vianello@salvagninigroup.com>; pgsql-bugs@lists.postgresql.org; PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>\nSubject: Re: BUG #16481: Stored Procedure Triggered by Logical Replication is Unable to use Notification Events\n \n\n\nOn Mon, 8 Jun 2020 at 05:27, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n\n\n\r\nThat can be fixed by calling ProcessCompletedNotifies() in\r\napply_handle_commit. The function has a code to write out\r\nnotifications to connected clients but it doesn't nothing on logical\r\nreplication workers.\n\n\n \n\n\nThis bug was already reported some time ago (#15293) but it slipped through the\r\n\r\ncracks. I don't think you should simply call ProcessCompletedNotifies [1].\n\n\n \n\n\n[1] https://www.postgresql.org/message-id/13844.1532468610%40sss.pgh.pa.us\n\n\n \n\n\n \n\n\n-- \n\n\nEuler Taveira \r\nhttp://www.2ndQuadrant.com/\r\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n\n\n\n\n\nSALVAGNINI ITALIA S.p.A. \n\n\n\r\nVia Guido Salvagnini, 51 - IT - 36040 Sarego (VI)\r\nT. +39 0444 725111 | F. +39 0444 43 6404\r\nSocietà a socio unico - Attività direz. e coord.: Salvagnini Holding S.p.A.\nClicca qui per le informazioni societarie\nsalvagninigroup.com |\r\nsalvagnini.it\n\n\n\n\n\r\nLe informazioni trasmesse sono destinate esclusivamente alla persona o alla società in indirizzo e sono da intendersi confidenziali e riservate. Ogni trasmissione, inoltro, diffusione o altro uso di queste informazioni a persone o società differenti dal destinatario\r\n è proibita. Se avete ricevuto questa comunicazione per errore, per favore contattate il mittente e cancellate le informazioni da ogni computer. 
Questa casella di posta elettronica è riservata esclusivamente all’invio ed alla ricezione di messaggi aziendali\r\n inerenti all’attività lavorativa e non è previsto né autorizzato l’utilizzo per fini personali. Pertanto i messaggi in uscita e quelli di risposta in entrata verranno trattati quali messaggi aziendali e soggetti alla ordinaria gestione disposta con proprio\r\n disciplinare dall’azienda e, di conseguenza, eventualmente anche alla lettura da parte di persone diverse dall’intestatario della casella.\r\n\n\r\nAny information herein transmitted only concerns the person or the company named in the address and is deemed to be confidential. It is strictly forbidden to transmit, post, forward or otherwise use said information to anyone other than the recipient. If you\r\n have received this message by mistake, please contact the sender and delete any relevant information from your computer. This mailbox is only meant for sending and receiving messages pertaining business matters and any other use for personal purposes is forbidden\r\n and unauthorized. Therefore, any email sent and received will be handled as ordinary business messages and subject to the company's own rules, and may thus be read also by people other than the user named in the mailbox address.",
"msg_date": "Tue, 9 Jun 2020 05:59:04 +0000",
"msg_from": "Vianello Fabio <fabio.vianello@salvagninigroup.com>",
"msg_from_op": false,
"msg_subject": "RE: BUG #16481: Stored Procedure Triggered by Logical Replication is\n Unable to use Notification Events"
},
{
"msg_contents": "Hello, Euler.\n\nAt Mon, 8 Jun 2020 07:51:18 -0300, Euler Taveira <euler.taveira@2ndquadrant.com> wrote in \n> On Mon, 8 Jun 2020 at 05:27, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> wrote:\n> \n> >\n> > That can be fixed by calling ProcessCompletedNotifies() in\n> > apply_handle_commit. The function has a code to write out\n> > notifications to connected clients but it doesn't nothing on logical\n> > replication workers.\n> >\n> >\n> This bug was already reported some time ago (#15293) but it slipped through\n> the\n> cracks. I don't think you should simply call ProcessCompletedNotifies [1].\n\nYeah, Thanks for pointing that. I faintly thought of a similar thing\nto the discussion there. Just calling ProcessCompletedNotifies in\napply_handle_commit is actually wrong.\n\nWe can move only SignalBackends() to AtCommit_Notify since\nasyncQueueAdvanceTail() is no longer dependent on the result of\nSignalBackends, but anyway we need to call asyncQueueAdvanceTail in\nAtCommit_Notify and AtAbort_Notify since otherwise the queue cannot be\nshorten while running logical replication. This can slightly defers\ntail-advancing but I think it wouldn't be a significant problem.\n\n> [1] https://www.postgresql.org/message-id/13844.1532468610%40sss.pgh.pa.us\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 09 Jun 2020 15:09:21 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16481: Stored Procedure Triggered by Logical Replication\n is Unable to use Notification Events"
},
{
"msg_contents": "On Tue, Jun 9, 2020 at 12:36 AM Vianello Fabio <\nfabio.vianello@salvagninigroup.com> wrote:\n\n> Is PostgreSQL a serious product? For me the answer is \"NO\". A product with\n> a bug that last for years and the community knows.\n>\n> It is not serious.\n>\n\nIf you are trying to be a troll just go away, we don't need that here. If\nyou are just venting consider that this project is no more or less likely\nto have oversights, incomplete documentation, or a myriad of other human\ninduced issues than any other.\n\nReading the linked bug report the conclusion was \"won't fix - at least not\nright now\". Sure, it probably should have been documented but wasn't. It\nhappens. And given the lack of complaints in the intervening years the\ndecision, to not devote volunteer resources to a marginal feature, seems\nlike the right one. Your active recourse at this point is to either\nconvince a volunteer hacker to take up the cause - which your attitude\ndoesn't help - or pay someone to do it. Or figure out a personal\nwork-around to live with the current reality. Given that there is ongoing\ndiscussion as a result of your report (i.e., the report has merit\nregardless of how it was reported) means you should either meaningfully\ncontribute to the discussion or shut up until they are done - at which\npoint you are back to making a decision. Prompting politely for attention\nif the thread seems to languish without a clear resolution is, IMO,\nacceptable.\n\nDavid J.\n\nOn Tue, Jun 9, 2020 at 12:36 AM Vianello Fabio <fabio.vianello@salvagninigroup.com> wrote:\n\n\nIs PostgreSQL a serious product? For me the answer is \"NO\". A product with a bug that last for years and the community knows.\n\nIt is not serious.If you are trying to be a troll just go away, we don't need that here. 
If you are just venting consider that this project is no more or less likely to have oversights, incomplete documentation, or a myriad of other human induced issues than any other.Reading the linked bug report the conclusion was \"won't fix - at least not right now\". Sure, it probably should have been documented but wasn't. It happens. And given the lack of complaints in the intervening years the decision, to not devote volunteer resources to a marginal feature, seems like the right one. Your active recourse at this point is to either convince a volunteer hacker to take up the cause - which your attitude doesn't help - or pay someone to do it. Or figure out a personal work-around to live with the current reality. Given that there is ongoing discussion as a result of your report (i.e., the report has merit regardless of how it was reported) means you should either meaningfully contribute to the discussion or shut up until they are done - at which point you are back to making a decision. Prompting politely for attention if the thread seems to languish without a clear resolution is, IMO, acceptable.David J.",
"msg_date": "Tue, 9 Jun 2020 08:10:22 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16481: Stored Procedure Triggered by Logical Replication is\n Unable to use Notification Events"
},
{
"msg_contents": "I think you did follow all the thread. I only ask if it is a bug or not. If it as bug and last for years I understand your point our view but I have my.\r\n\r\n\r\nIf you think that signal a bug is not give a contribution I am astonished.\r\n\r\nImprove PosgreSQL is the target!!!!\r\n\r\n\r\n\r\nBest Regard.\r\n\r\n\r\n\r\nFabio Vianello.\r\n\r\n\r\n\r\nFrom: David G. Johnston [mailto:david.g.johnston@gmail.com]\r\nSent: martedì 9 giugno 2020 17:10\r\nTo: Vianello Fabio <fabio.vianello@salvagninigroup.com>\r\nCc: PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>; pgsql-bugs@lists.postgresql.org; Kyotaro Horiguchi <horikyota.ntt@gmail.com>; Euler Taveira <euler.taveira@2ndquadrant.com>\r\nSubject: Re: BUG #16481: Stored Procedure Triggered by Logical Replication is Unable to use Notification Events\r\n\r\nOn Tue, Jun 9, 2020 at 12:36 AM Vianello Fabio <fabio.vianello@salvagninigroup.com<mailto:fabio.vianello@salvagninigroup.com>> wrote:\r\nIs PostgreSQL a serious product? For me the answer is \"NO\". A product with a bug that last for years and the community knows.\r\nIt is not serious.\r\n\r\nIf you are trying to be a troll just go away, we don't need that here. If you are just venting consider that this project is no more or less likely to have oversights, incomplete documentation, or a myriad of other human induced issues than any other.\r\n\r\nReading the linked bug report the conclusion was \"won't fix - at least not right now\". Sure, it probably should have been documented but wasn't. It happens. And given the lack of complaints in the intervening years the decision, to not devote volunteer resources to a marginal feature, seems like the right one. Your active recourse at this point is to either convince a volunteer hacker to take up the cause - which your attitude doesn't help - or pay someone to do it. Or figure out a personal work-around to live with the current reality. 
Given that there is ongoing discussion as a result of your report (i.e., the report has merit regardless of how it was reported) means you should either meaningfully contribute to the discussion or shut up until they are done - at which point you are back to making a decision. Prompting politely for attention if the thread seems to languish without a clear resolution is, IMO, acceptable.\r\n\r\nDavid J.",
"msg_date": "Wed, 10 Jun 2020 06:02:54 +0000",
"msg_from": "Vianello Fabio <fabio.vianello@salvagninigroup.com>",
"msg_from_op": false,
"msg_subject": "RE: BUG #16481: Stored Procedure Triggered by Logical Replication is\n Unable to use Notification Events"
},
{
"msg_contents": "Just to help those who find themselves in the same situation, there is a simple application-level workaround which consists in listening to the notifications in the replica when they are issued by the master and vice versa.\r\n\r\nWe are programming in .NET and we use a code like this:\r\n\r\nCode in the replica side:\r\n\r\n var csb = new NpgsqlConnectionStringBuilder\r\n {\r\n Host = \"master\",\r\n Database = \"MasterDB\",\r\n Port = 5432,\r\n Username = \"postgres\",\r\n Password = \"XXXXXXX\",\r\n\r\n };\r\n var connection = new NpgsqlConnection(csb.ConnectionString);\r\n connection.Open();\r\n using (var command = new NpgsqlCommand(\"listen \\\"Table\\\"\", connection))\r\n {\r\n command.ExecuteNonQuery();\r\n }\r\n connection.Notification += PostgresNotification;\r\n\r\nSo you can listen from the replica every changed raised by a trigger on the master from the replica side on the table \"Table\".\r\n\r\nCREATE TRIGGER master_trigger\r\n AFTER INSERT OR DELETE OR UPDATE\r\n ON public.\"TABLE\"\r\n FOR EACH ROW\r\n EXECUTE PROCEDURE public.master_notice();\r\n\r\nALTER TABLE public.\"Tabele\"\r\n ENABLE ALWAYS TRIGGER master_trigger;\r\n\r\nCREATE FUNCTION public. master_notice ()\r\n RETURNS trigger\r\n LANGUAGE 'plpgsql'\r\n COST 100\r\n VOLATILE NOT LEAKPROOF\r\nAS $BODY$\r\n BEGIN\r\n PERFORM pg_notify('Table', Cast(NEW.\"ID\" as text));\r\n RETURN NEW;\r\n END;\r\n $BODY$;\r\n\r\nALTER FUNCTION public.incupdate1_notice()\r\n OWNER TO postgres;\r\n\r\nI hope that help someone, because the bug last from \"years\". 
I tried in version 10 11 and 12, so it is present since 2017-10-05 and I can't see any solution on 13 beta.\r\n\r\nBest Regards.\r\nFabio.\r\n\r\n\r\n-----Original Message-----\r\nFrom: Vianello Fabio\r\nSent: lunedì 8 giugno 2020 11:14\r\nTo: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\r\nCc: pgsql-bugs@lists.postgresql.org; pgsql-hackers@lists.postgresql.org\r\nSubject: RE: BUG #16481: Stored Procedure Triggered by Logical Replication is Unable to use Notification Events\r\n\r\nHi Kyotaro Horiguchi, thanks for you helps.\r\nWe have a question about the bug. Why there isn't any solution in the HEAD?\r\n\r\nThis bug last since 10.4 version and I can't understand why it is not fixed in the HEAD yet.\r\n\r\nBR.\r\nFabio Vianello.\r\n\r\n\r\n-----Original Message-----\r\nFrom: Kyotaro Horiguchi [mailto:horikyota.ntt@gmail.com]\r\nSent: lunedì 8 giugno 2020 10:28\r\nTo: Vianello Fabio <fabio.vianello@salvagninigroup.com>; pgsql-bugs@lists.postgresql.org; pgsql-hackers@lists.postgresql.org\r\nSubject: Re: BUG #16481: Stored Procedure Triggered by Logical Replication is Unable to use Notification Events\r\n\r\nHello.\r\n\r\nIt seems to me a bug.\r\n\r\nAt Fri, 05 Jun 2020 11:05:14 +0000, PG Bug reporting form <noreply@postgresql.org> wrote in\r\n> The following bug has been logged on the website:\r\n>\r\n> Bug reference: 16481\r\n> Logged by: Fabio Vianello\r\n> Email address: fabio.vianello@salvagninigroup.com\r\n> PostgreSQL version: 12.3\r\n> Operating system: Windows 10\r\n> Description:\r\n>\r\n> About the bug BUG #15293, on PostgreSQL version 10.4 and 11.2 as\r\n> describe below, we found the same issue on the PostgreSQL version 12.3.\r\n\r\nThe HEAD behaves the same way.\r\n\r\n> Is it a feature?\r\n> Becasue in the documentation we didn't found any constraint that says\r\n> that we can not use NOTIFY/LISTEN on logical replication tables.\r\n>\r\n> \"When using logical replication a stored procedure executed on the\r\n> replica is unable to use NOTIFY to send 
messages to other listeners.\r\n> The stored procedure does execute as expected but the pg_notify()\r\n> doesn't appear to have any effect. If an insert is run on the replica\r\n> side the trigger executes the stored procedure as expected and the\r\n> NOTIFY correctly notifies listeners.\r\n\r\nThe message is actually queued, but logical replication worker doesn't signal that to listener backends. If any ordinary session sent a message to the same listener after that, both messages would be shown at once.\r\n\r\nThat can be fixed by calling ProcessCompletedNotifies() in apply_handle_commit. The function has a code to write out notifications to connected clients but it doesn't nothing on logical replication workers.\r\n\r\nregards.\r\n\r\n--\r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Fri, 12 Jun 2020 06:21:01 +0000",
"msg_from": "Vianello Fabio <fabio.vianello@salvagninigroup.com>",
"msg_from_op": false,
"msg_subject": "RE: BUG #16481: Stored Procedure Triggered by Logical Replication is\n Unable to use Notification Events"
}
] |
[
{
"msg_contents": "Hi\n\ndiff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\nindex 7c06afd3ea..3b810c0eb4 100644\n--- a/doc/src/sgml/func.sgml\n+++ b/doc/src/sgml/func.sgml\n@@ -17821,7 +17821,6 @@ SELECT NULLIF(value, '(none)') ...\n 1\n 2\n </programlisting>\n- (2 rows in result)\n </para></entry>\n </row>\n\nRegards\n\nPavel\n\nHidiff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgmlindex 7c06afd3ea..3b810c0eb4 100644--- a/doc/src/sgml/func.sgml+++ b/doc/src/sgml/func.sgml@@ -17821,7 +17821,6 @@ SELECT NULLIF(value, '(none)') ... 1 2 </programlisting>- (2 rows in result) </para></entry> </row>RegardsPavel",
"msg_date": "Fri, 5 Jun 2020 13:58:44 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "minor doc fix - garbage in example of result of unnest"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\n> index 7c06afd3ea..3b810c0eb4 100644\n> --- a/doc/src/sgml/func.sgml\n> +++ b/doc/src/sgml/func.sgml\n> @@ -17821,7 +17821,6 @@ SELECT NULLIF(value, '(none)') ...\n> 1\n> 2\n> </programlisting>\n> - (2 rows in result)\n> </para></entry>\n> </row>\n\nThat's not \"garbage\", I put it there intentionally.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Jun 2020 09:56:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: minor doc fix - garbage in example of result of unnest"
},
{
"msg_contents": "On Fri, Jun 05, 2020 at 09:56:54AM -0400, Tom Lane wrote:\n>Pavel Stehule <pavel.stehule@gmail.com> writes:\n>> diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\n>> index 7c06afd3ea..3b810c0eb4 100644\n>> --- a/doc/src/sgml/func.sgml\n>> +++ b/doc/src/sgml/func.sgml\n>> @@ -17821,7 +17821,6 @@ SELECT NULLIF(value, '(none)') ...\n>> 1\n>> 2\n>> </programlisting>\n>> - (2 rows in result)\n>> </para></entry>\n>> </row>\n>\n>That's not \"garbage\", I put it there intentionally.\n>\n\nWhy is it outside the <programlisting> though? Also, the next unnest\nexample does not include the number of rows - why the difference?\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 5 Jun 2020 17:38:11 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: minor doc fix - garbage in example of result of unnest"
},
{
"msg_contents": "On Fri, Jun 5, 2020 at 8:38 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On Fri, Jun 05, 2020 at 09:56:54AM -0400, Tom Lane wrote:\n> >Pavel Stehule <pavel.stehule@gmail.com> writes:\n> >> diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\n> >> index 7c06afd3ea..3b810c0eb4 100644\n> >> --- a/doc/src/sgml/func.sgml\n> >> +++ b/doc/src/sgml/func.sgml\n> >> @@ -17821,7 +17821,6 @@ SELECT NULLIF(value, '(none)') ...\n> >> 1\n> >> 2\n> >> </programlisting>\n> >> - (2 rows in result)\n> >> </para></entry>\n> >> </row>\n> >\n> >That's not \"garbage\", I put it there intentionally.\n> >\n>\n> Why is it outside the <programlisting> though? Also, the next unnest\n> example does not include the number of rows - why the difference?\n>\n\nMore generally I think the final unnest listing is probably the best\npresentation for a set-returning function, and the one under discussion\nshould be consistent with it. The function reference documentation doesn't\nseem like a place to introduce strictly psql output variations like /pset\nfooter on|off. A set-returning function should be displayed in the example\noutput as a structured table like the last unnest example. I don't think\nit's necessary for \"single column\" sets to be an exception. 
Since the\nexamples convert null to (none) there isn't even whitespace concerns in the\noutput where you want to display a bottom boundary for clarity.\n\nRelated, It seems arbitrary that the case-related full block examples do\nnot have row counts but the aggregate full block examples do.\n\nI don't see where having the row counts in the output adds clarity and it\njust more text for the reader to mentally filter out.\n\nDavid J.\n\nOn Fri, Jun 5, 2020 at 8:38 AM Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:On Fri, Jun 05, 2020 at 09:56:54AM -0400, Tom Lane wrote:\n>Pavel Stehule <pavel.stehule@gmail.com> writes:\n>> diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\n>> index 7c06afd3ea..3b810c0eb4 100644\n>> --- a/doc/src/sgml/func.sgml\n>> +++ b/doc/src/sgml/func.sgml\n>> @@ -17821,7 +17821,6 @@ SELECT NULLIF(value, '(none)') ...\n>> 1\n>> 2\n>> </programlisting>\n>> - (2 rows in result)\n>> </para></entry>\n>> </row>\n>\n>That's not \"garbage\", I put it there intentionally.\n>\n\nWhy is it outside the <programlisting> though? Also, the next unnest\nexample does not include the number of rows - why the difference?More generally I think the final unnest listing is probably the best presentation for a set-returning function, and the one under discussion should be consistent with it. The function reference documentation doesn't seem like a place to introduce strictly psql output variations like /pset footer on|off. A set-returning function should be displayed in the example output as a structured table like the last unnest example. I don't think it's necessary for \"single column\" sets to be an exception. 
Since the examples convert null to (none) there isn't even whitespace concerns in the output where you want to display a bottom boundary for clarity.Related, It seems arbitrary that the case-related full block examples do not have row counts but the aggregate full block examples do.I don't see where having the row counts in the output adds clarity and it just more text for the reader to mentally filter out.David J.",
"msg_date": "Fri, 5 Jun 2020 08:49:46 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: minor doc fix - garbage in example of result of unnest"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n>> On Fri, Jun 05, 2020 at 09:56:54AM -0400, Tom Lane wrote:\n>>> That's not \"garbage\", I put it there intentionally.\n\n> I don't see where having the row counts in the output adds clarity and it\n> just more text for the reader to mentally filter out.\n\nSeems like I'm getting outvoted here. If there aren't other votes,\nI'll change it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Jun 2020 12:19:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: minor doc fix - garbage in example of result of unnest"
}
] |
[
{
"msg_contents": "The following documentation comment has been logged on the website:\n\nPage: https://www.postgresql.org/docs/11/sql-createdatabase.html\nDescription:\n\nMy understanding is that not copying the ACL is the (currently) expected\nbehavior when issuing CREATE DATABASE newdb WITH TEMPLATE my_tmpl;\r\nIt would be useful for the documentation to note this caveat.",
"msg_date": "Fri, 05 Jun 2020 14:31:34 +0000",
"msg_from": "PG Doc comments form <noreply@postgresql.org>",
"msg_from_op": true,
"msg_subject": "create database with template doesn't copy ACL"
},
{
"msg_contents": "On Fri, Jun 5, 2020 at 02:31:34PM +0000, PG Doc comments form wrote:\n> The following documentation comment has been logged on the website:\n> \n> Page: https://www.postgresql.org/docs/11/sql-createdatabase.html\n> Description:\n> \n> My understanding is that not copying the ACL is the (currently) expected\n> behavior when issuing CREATE DATABASE newdb WITH TEMPLATE my_tmpl;\n> It would be useful for the documentation to note this caveat.\n\nUh, what ACLs are not copied?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Fri, 12 Jun 2020 17:29:51 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: create database with template doesn't copy ACL"
},
{
"msg_contents": "On Fri, Jun 12, 2020 at 05:29:51PM -0400, Bruce Momjian wrote:\n> On Fri, Jun 5, 2020 at 02:31:34PM +0000, PG Doc comments form wrote:\n> > The following documentation comment has been logged on the website:\n> > \n> > Page: https://www.postgresql.org/docs/11/sql-createdatabase.html\n> > Description:\n> > \n> > My understanding is that not copying the ACL is the (currently) expected\n> > behavior when issuing CREATE DATABASE newdb WITH TEMPLATE my_tmpl;\n> > It would be useful for the documentation to note this caveat.\n> \n> Uh, what ACLs are not copied?\n\nThe ACL on the database itself. For example:\n\npostgres@postgres[[local]#9655]=# CREATE DATABASE acl_template WITH IS_TEMPLATE = 1;\nCREATE DATABASE\npostgres@postgres[[local]#9655]=# REVOKE ALL ON DATABASE acl_template FROM PUBLIC;\nREVOKE\npostgres@postgres[[local]#9655]=# CREATE DATABASE acl_test WITH TEMPLATE = acl_template;\nCREATE DATABASE\npostgres@postgres[[local]#9655]=# SELECT datname, datacl FROM pg_database WHERE datname LIKE 'acl%';\n datname | datacl\n--------------+-------------------------\n acl_template | {postgres=CTc/postgres}\n acl_test |\n(2 rows)\n\nHere, the ACL on the new acl_test database does NOT match the ACL on the\nacl_template database upon which it is based.\n\nHope this helps,\n--Joe\n\n\n",
"msg_date": "Sun, 14 Jun 2020 07:26:13 +0000",
"msg_from": "Joseph Nahmias <joe@nahmias.net>",
"msg_from_op": false,
"msg_subject": "Re: create database with template doesn't copy ACL"
},
{
"msg_contents": "On Sun, Jun 14, 2020 at 07:26:13AM +0000, Joseph Nahmias wrote:\n> On Fri, Jun 12, 2020 at 05:29:51PM -0400, Bruce Momjian wrote:\n> > On Fri, Jun 5, 2020 at 02:31:34PM +0000, PG Doc comments form wrote:\n> > > The following documentation comment has been logged on the website:\n> > > \n> > > Page: https://www.postgresql.org/docs/11/sql-createdatabase.html\n> > > Description:\n> > > \n> > > My understanding is that not copying the ACL is the (currently) expected\n> > > behavior when issuing CREATE DATABASE newdb WITH TEMPLATE my_tmpl;\n> > > It would be useful for the documentation to note this caveat.\n> > \n> > Uh, what ACLs are not copied?\n> \n> The ACL on the database itself. For example:\n> \n> postgres@postgres[[local]#9655]=# CREATE DATABASE acl_template WITH IS_TEMPLATE = 1;\n> CREATE DATABASE\n> postgres@postgres[[local]#9655]=# REVOKE ALL ON DATABASE acl_template FROM PUBLIC;\n> REVOKE\n> postgres@postgres[[local]#9655]=# CREATE DATABASE acl_test WITH TEMPLATE = acl_template;\n> CREATE DATABASE\n> postgres@postgres[[local]#9655]=# SELECT datname, datacl FROM pg_database WHERE datname LIKE 'acl%';\n> datname | datacl\n> --------------+-------------------------\n> acl_template | {postgres=CTc/postgres}\n> acl_test |\n> (2 rows)\n> \n> Here, the ACL on the new acl_test database does NOT match the ACL on the\n> acl_template database upon which it is based.\n\n[I am moving this to the hackers list because I am not clear if this is a\ndocumentation problem or a bug.]\n\nEffectively, we have three levels of objects:\n\n\t1 global, cluster-wide, e.g., tablespaces, users\n\t2 database attributes, e.g., database encoding, database tablespace\n\t3 objects inside of databases\n\nWe don't clearly describe it that way though. 
Looking at the test:\n\n\tpsql -a <<END\n\tALTER DATABASE acl_template WITH IS_TEMPLATE false;\n\tDROP DATABASE IF EXISTS acl_template;\n\tDROP DATABASE IF EXISTS acl_test;\n\tCREATE DATABASE acl_template WITH IS_TEMPLATE = 1;\n\tREVOKE ALL ON DATABASE acl_template FROM PUBLIC;\n\tCREATE DATABASE acl_test WITH TEMPLATE = acl_template;\n\tSELECT datname, datacl FROM pg_database WHERE datname LIKE 'acl%';\n\t datname | datacl\n\t--------------+-------------------------\n\t acl_template | {postgres=CTc/postgres}\n\t acl_test | (null)\n\tEND\n\n\t$ pg_dump acl_template | grep CONNECT\n\n\t$ pg_dump --create acl_template | grep CONNECT\n\tREVOKE CONNECT,TEMPORARY ON DATABASE acl_template FROM PUBLIC;\n\n\t$ pg_dumpall --globals-only | grep CONNECT\n\n\t$ pg_dumpall | grep CONNECT\n\tREVOKE CONNECT,TEMPORARY ON DATABASE acl_template FROM PUBLIC;\n\nit appears database CONNECT and TEMPORARY are treated as database\nattributes (2) because they are only dumped when the database is being\ncreated, not by pg_dumpall --globals-only(1) or pg_dump(3).\n\nI am unclear if we should be copying the CONNECT and TEMPORARY\nattributes or documenting that CREATE DATABASE does not copy them.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Sun, 14 Jun 2020 23:11:02 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: create database with template doesn't copy database ACL"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> I am unclear if we should be copying the CONNECT and TEMPORARY\n> attributes or documenting that CREATE DATABASE does not copy them.\n\nWe should absolutely not copy them.\n\nAs an example, it'd make sense for an admin to revoke CONNECT on a\ntemplate database, just to help ensure that nobody modifies it.\nIf that propagated to every created database, it would be a complete\nfail.\n\nMoreover, since the ACLs of an object depend quite a bit on who the owner\nis, it'd make no sense to copy them to a new object that has a different\nowner. The granted-by fields would be wrong, if nothing else.\n\nIn practice, CREATE DATABASE never has copied any database-level property\nof the template DB, only its contents. (Well, I guess it copies encoding\nand collation by default, but those are descriptive of the contents.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 14 Jun 2020 23:24:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: create database with template doesn't copy database ACL"
},
{
"msg_contents": "On Sun, Jun 14, 2020 at 11:24:56PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > I am unclear if we should be copying the CONNECT and TEMPORARY\n> > attributes or documenting that CREATE DATABASE does not copy them.\n> \n> We should absolutely not copy them.\n> \n> As an example, it'd make sense for an admin to revoke CONNECT on a\n> template database, just to help ensure that nobody modifies it.\n> If that propagated to every created database, it would be a complete\n> fail.\n> \n> Moreover, since the ACLs of an object depend quite a bit on who the owner\n> is, it'd make no sense to copy them to a new object that has a different\n> owner. The granted-by fields would be wrong, if nothing else.\n> \n> In practice, CREATE DATABASE never has copied any database-level property\n> of the template DB, only its contents. (Well, I guess it copies encoding\n> and collation by default, but those are descriptive of the contents.)\n\nWell, I thought we copied everything except things tha can be specified\nas different in CREATE DATABASE, though I can see why we would not copy\nthem. Should we document this or issue a notice about not copying\nnon-default database attributes?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Sun, 14 Jun 2020 23:39:34 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: create database with template doesn't copy database ACL"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> Well, I thought we copied everything except things tha can be specified\n> as different in CREATE DATABASE, though I can see why we would not copy\n> them. Should we document this or issue a notice about not copying\n> non-default database attributes?\n\nWe do not need a notice for behavior that (a) has stood for twenty years\nor so, and (b) is considerably less broken than any alternative would be.\nIf you feel the docs need improvement, have at that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Jun 2020 00:14:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: create database with template doesn't copy database ACL"
},
{
"msg_contents": "On Mon, Jun 15, 2020 at 12:14:55AM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > Well, I thought we copied everything except things tha can be specified\n> > as different in CREATE DATABASE, though I can see why we would not copy\n> > them. Should we document this or issue a notice about not copying\n> > non-default database attributes?\n> \n> We do not need a notice for behavior that (a) has stood for twenty years\n> or so, and (b) is considerably less broken than any alternative would be.\n> If you feel the docs need improvement, have at that.\n\nWell, I realize it has been this way for a long time, and that no one\nelse has complained, but there should be a way for people to know what\nis being copied from the template and what is not. Do we have a clear\ndescription of what is copied and skipped?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Mon, 15 Jun 2020 10:10:32 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: create database with template doesn't copy database ACL"
},
{
"msg_contents": "On Mon, Jun 15, 2020 at 10:10:32AM -0400, Bruce Momjian wrote:\n> On Mon, Jun 15, 2020 at 12:14:55AM -0400, Tom Lane wrote:\n> > Bruce Momjian <bruce@momjian.us> writes:\n> > > Well, I thought we copied everything except things tha can be specified\n> > > as different in CREATE DATABASE, though I can see why we would not copy\n> > > them. Should we document this or issue a notice about not copying\n> > > non-default database attributes?\n> > \n> > We do not need a notice for behavior that (a) has stood for twenty years\n> > or so, and (b) is considerably less broken than any alternative would be.\n> > If you feel the docs need improvement, have at that.\n> \n> Well, I realize it has been this way for a long time, and that no one\n> else has complained, but there should be a way for people to know what\n> is being copied from the template and what is not. Do we have a clear\n> description of what is copied and skipped?\n\nWe already mentioned that ALTER DATABASE settings are not copied, so the\nattached patch adds a mention that GRANT-level permissions are not\ncopied either.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee",
"msg_date": "Tue, 16 Jun 2020 06:10:54 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: create database with template doesn't copy database ACL"
},
{
"msg_contents": "On Tue, Jun 16, 2020 at 06:10:54AM -0400, Bruce Momjian wrote:\n> On Mon, Jun 15, 2020 at 10:10:32AM -0400, Bruce Momjian wrote:\n> > On Mon, Jun 15, 2020 at 12:14:55AM -0400, Tom Lane wrote:\n> > > Bruce Momjian <bruce@momjian.us> writes:\n> > > > Well, I thought we copied everything except things tha can be specified\n> > > > as different in CREATE DATABASE, though I can see why we would not copy\n> > > > them. Should we document this or issue a notice about not copying\n> > > > non-default database attributes?\n> > > \n> > > We do not need a notice for behavior that (a) has stood for twenty years\n> > > or so, and (b) is considerably less broken than any alternative would be.\n> > > If you feel the docs need improvement, have at that.\n> > \n> > Well, I realize it has been this way for a long time, and that no one\n> > else has complained, but there should be a way for people to know what\n> > is being copied from the template and what is not. Do we have a clear\n> > description of what is copied and skipped?\n> \n> We already mentioned that ALTER DATABASE settings are not copied, so the\n> attached patch adds a mention that GRANT-level permissions are not\n> copied either.\n\nPatch applied to all supported versions. Thanks for the discussion.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Thu, 25 Jun 2020 18:23:11 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: create database with template doesn't copy database ACL"
}
] |
[
{
"msg_contents": "Hello,\n\nCurrently, in case of alternative subplans that do hashed vs per-row \nlookups,\nthe per-row estimate is used when planning the rest of the query.\nIt's also coded in not quite an explicit way.\n\nIn [1] we found a situation where it leads to a suboptimal plan,\nas it bloats the overall cost into large figures,\na decision related to an outer part of the plan look negligible to the \nplanner,\nand as a result it doesn't elaborate on choosing the optimal one.\n\nThe patch is to fix it. Our linear model for costs cannot quite accommodate\nthe piecewise linear matter of alternative subplans,\nso it is based on ugly heuristics and still cannot be very precise,\nbut I think it's better than the current one.\n\nThoughts?\n\nBest, Alex\n\n[1] \nhttps://www.postgresql.org/message-id/flat/ff42b25b-ff03-27f8-ed11-b8255d658cd5%40imap.cc",
"msg_date": "Fri, 5 Jun 2020 17:08:02 +0100",
"msg_from": "Alexey Bashtanov <bashtanov@imap.cc>",
"msg_from_op": true,
"msg_subject": "Improve planner cost estimations for alternative subplans"
},
{
"msg_contents": "On Fri, Jun 5, 2020 at 9:08 AM Alexey Bashtanov <bashtanov@imap.cc> wrote:\n\n>\n> In [1] we found a situation where it leads to a suboptimal plan,\n> as it bloats the overall cost into large figures,\n> a decision related to an outer part of the plan look negligible to the\n> planner,\n> and as a result it doesn't elaborate on choosing the optimal one.\n>\n>\nI've just started looking at this patch today, but I was wondering if\nyou might include a test case which minimally reproduces the original\nproblem you had.\nThe only plan diff I see is in updatable_views.sql, and it doesn't\nillustrate the problem as well as a more straightforward SELECT query\nwith EXISTS sublink might.\n\nAfter putting in some logging, I see that there are only a\nfew non-catalog queries which exercise this code path.\nThis query from groupingsets.sql is an example of one such query:\n\nselect ten, sum(distinct four) from onek a\ngroup by grouping sets((ten,four),(ten))\nhaving exists (select 1 from onek b where sum(distinct a.four) = b.four);\n\nBut, the chosen plan for this query stays the same.\n\nIt would be helpful to see a query where a different plan is chosen\nbecause of this change that is not from updatable_views.sql.\n\n-- \nMelanie Plageman\n\nOn Fri, Jun 5, 2020 at 9:08 AM Alexey Bashtanov <bashtanov@imap.cc> wrote:\nIn [1] we found a situation where it leads to a suboptimal plan,\nas it bloats the overall cost into large figures,\na decision related to an outer part of the plan look negligible to the \nplanner,\nand as a result it doesn't elaborate on choosing the optimal one.\nI've just started looking at this patch today, but I was wondering ifyou might include a test case which minimally reproduces the originalproblem you had. 
The only plan diff I see is in updatable_views.sql, and it doesn'tillustrate the problem as well as a more straightforward SELECT querywith EXISTS sublink might.After putting in some logging, I see that there are only afew non-catalog queries which exercise this code path.This query from groupingsets.sql is an example of one such query: select ten, sum(distinct four) from onek agroup by grouping sets((ten,four),(ten))having exists (select 1 from onek b where sum(distinct a.four) = b.four);But, the chosen plan for this query stays the same.It would be helpful to see a query where a different plan is chosenbecause of this change that is not from updatable_views.sql.-- Melanie Plageman",
"msg_date": "Tue, 16 Jun 2020 18:15:50 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve planner cost estimations for alternative subplans"
},
{
"msg_contents": "On Fri, Jun 5, 2020 at 9:08 AM Alexey Bashtanov <bashtanov@imap.cc> wrote:\n\n>\n> In [1] we found a situation where it leads to a suboptimal plan,\n> as it bloats the overall cost into large figures,\n> a decision related to an outer part of the plan look negligible to the\n> planner,\n> and as a result it doesn't elaborate on choosing the optimal one.\n>\n>\nDid this geometric average method result in choosing the desired plan for\nthis case?\n\n\n> The patch is to fix it. Our linear model for costs cannot quite accommodate\n> the piecewise linear matter of alternative subplans,\n> so it is based on ugly heuristics and still cannot be very precise,\n> but I think it's better than the current one.\n>\n> Thoughts?\n>\n>\nIs there another place in planner where two alternatives are averaged\ntogether and that cost is used?\n\nTo me, it feels a little bit weird that we are averaging together the\nstartup cost of a plan which will always have a 0 startup cost and a\nplan that will always have a non-zero startup cost and the per tuple\ncost of a plan that will always have a negligible per tuple cost and one\nthat might have a very large per tuple cost.\n\nI guess it feels different because instead of comparing alternatives you\nare blending them.\n\nI don't have any academic basis for saying that the alternatives costs\nshouldn't be averaged together for use in the rest of the plan, so I\ncould definitely be wrong.\n\n-- \nMelanie Plageman\n\nOn Fri, Jun 5, 2020 at 9:08 AM Alexey Bashtanov <bashtanov@imap.cc> wrote:\nIn [1] we found a situation where it leads to a suboptimal plan,\nas it bloats the overall cost into large figures,\na decision related to an outer part of the plan look negligible to the \nplanner,\nand as a result it doesn't elaborate on choosing the optimal one.\nDid this geometric average method result in choosing the desired plan forthis case? \nThe patch is to fix it. 
Our linear model for costs cannot quite accommodate\nthe piecewise linear matter of alternative subplans,\nso it is based on ugly heuristics and still cannot be very precise,\nbut I think it's better than the current one.\n\nThoughts?\nIs there another place in planner where two alternatives are averagedtogether and that cost is used?To me, it feels a little bit weird that we are averaging together thestartup cost of a plan which will always have a 0 startup cost and aplan that will always have a non-zero startup cost and the per tuplecost of a plan that will always have a negligible per tuple cost and onethat might have a very large per tuple cost.I guess it feels different because instead of comparing alternatives youare blending them.I don't have any academic basis for saying that the alternatives costsshouldn't be averaged together for use in the rest of the plan, so Icould definitely be wrong. -- Melanie Plageman",
"msg_date": "Wed, 17 Jun 2020 18:21:58 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve planner cost estimations for alternative subplans"
},
{
"msg_contents": "On Wed, Jun 17, 2020 at 06:21:58PM -0700, Melanie Plageman wrote:\n>On Fri, Jun 5, 2020 at 9:08 AM Alexey Bashtanov <bashtanov@imap.cc> wrote:\n>\n>>\n>> In [1] we found a situation where it leads to a suboptimal plan,\n>> as it bloats the overall cost into large figures,\n>> a decision related to an outer part of the plan look negligible to the\n>> planner,\n>> and as a result it doesn't elaborate on choosing the optimal one.\n>>\n>>\n>Did this geometric average method result in choosing the desired plan for\n>this case?\n>\n>\n>> The patch is to fix it. Our linear model for costs cannot quite accommodate\n>> the piecewise linear matter of alternative subplans,\n>> so it is based on ugly heuristics and still cannot be very precise,\n>> but I think it's better than the current one.\n>>\n>> Thoughts?\n>>\n>>\n>Is there another place in planner where two alternatives are averaged\n>together and that cost is used?\n>\n>To me, it feels a little bit weird that we are averaging together the\n>startup cost of a plan which will always have a 0 startup cost and a\n>plan that will always have a non-zero startup cost and the per tuple\n>cost of a plan that will always have a negligible per tuple cost and one\n>that might have a very large per tuple cost.\n>\n>I guess it feels different because instead of comparing alternatives you\n>are blending them.\n>\n>I don't have any academic basis for saying that the alternatives costs\n>shouldn't be averaged together for use in the rest of the plan, so I\n>could definitely be wrong.\n>\n\nI agree it feels weird. Even if it actually improved the problematic\ncase, I think it'll be quite hard to convince ourselves this helps in\ngeneral. For example, for cases that actually end up using the first\nplan, this is bound to make the estimates worse. 
I find it hard to\nbelieve it won't cause regressions in at least some cases.\n\nMaybe this heuristics really is better than the old one, but I think we\nneed to understand why - a single query probably is not enough.\n\nI think the crucial limitation here is that we don't know which of the\nalternative plans will be used. Is there a chance to improve this,\nperhaps by making some sort of guess?\n\nI'm not particularly familiar with AlternativeSubPlans, but I see we're\npicking the one in nodeSubplan.c based on plan_rows. Can't we do the\nsame thing in cost_qual_eval_walker?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 21 Jun 2020 01:30:30 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve planner cost estimations for alternative subplans"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> I'm not particularly familiar with AlternativeSubPlans, but I see we're\n> picking the one in nodeSubplan.c based on plan_rows. Can't we do the\n> same thing in cost_qual_eval_walker?\n\nNope. The entire reason why we have that kluge is that we don't know\nuntil much later how many times we expect to execute the subplan.\nAlternativeSubPlan allows the decision which subplan form to use to be\npostponed till runtime; but when we're doing things like estimating the\ncost and selectivity of a where-clause, we don't know that.\n\nMaybe there's some way to recast things to avoid that problem,\nbut I have little clue what it'd look like.\n\nI agree that averaging together the costs of the alternatives seems\nwrong in principle. It's going to be one or the other, not some\nquantum-mechanical superposition. Maybe there's a case for taking the\nmin costs (if you're feeling lucky) or the max costs (if you're not),\non the belief that the executor will/will not pick the choice that\ncontributes least to the total query cost. But I have a feeling that\neither of those would distort our estimates too much. The existing\ncosting behavior at least can be seen to match *one* actual runtime\nbehavior.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 20 Jun 2020 21:17:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Improve planner cost estimations for alternative subplans"
},
{
"msg_contents": "I wrote:\n> Nope. The entire reason why we have that kluge is that we don't know\n> until much later how many times we expect to execute the subplan.\n> AlternativeSubPlan allows the decision which subplan form to use to be\n> postponed till runtime; but when we're doing things like estimating the\n> cost and selectivity of a where-clause, we don't know that.\n\n> Maybe there's some way to recast things to avoid that problem,\n> but I have little clue what it'd look like.\n\nActually ... maybe it's not that bad. Clearly there would be a\ncircularity issue for selectivity estimation, but all the alternatives\nshould have the same selectivity. Cost estimation is a different story:\nby the time we need to do cost estimation for a subexpression, we do in\nmany cases have an idea how often the subexpression will be executed.\n\nI experimented with adding a number-of-evaluations parameter to\ncost_qual_eval, and found that the majority of call sites do have\nsomething realistic they can pass. The attached very-much-WIP\npatch shows my results so far. There's a lot of loose ends:\n\n* Any call site using COST_QUAL_EVAL_DUMMY_NUM_EVALS is a potential spot\nfor future improvement. The only one that seems like it might be\nfundamentally unsolvable is cost_subplan's usage; but that would only\nmatter if a subplan's testexpr contains an AlternativeSubPlan, which is\nprobably a negligible case in practice. The other ones seem to just\nrequire more refactoring than I cared to do on a Sunday afternoon.\n\n* I did not do anything for postgres_fdw.c beyond making it compile.\nWe can surely do better there, but it might require some rethinking\nof the way that plan costs get cached.\n\n* The merge and hash join costsize.c functions estimate costs of qpquals\n(i.e. 
quals to be applied at the join that are not being used as merge\nor hash conditions) by computing cost_qual_eval of the whole\njoinrestrictlist and then subtracting off the cost of the merge or hash\nquals. This is kind of broken if we want to use different num_eval\nestimates for the qpquals and the merge/hash quals, which I think we do.\nThis probably just needs some refactoring to fix. We also need to save\nthe relevant rowcounts in the join Path nodes so that createplan.c can\ndo the right thing.\n\n* I think it might be possible to improve the situation in\nget_agg_clause_costs() if we're willing to postpone collection\nof the actual aggregate costs till later. This'd require two\npasses over the aggregate expressions, but that seems like it\nmight not be terribly expensive. (I'd be inclined to also look\nat the idea of merging duplicate agg calls at plan time not\nrun time, if we refactor that.)\n\n* I had to increase the number of table rows in one updatable_views.sql\ntest to keep the plans the same. Without that, the planner realized\nthat a seqscan would be cheaper than an indexscan. The result wasn't\nwrong exactly, but it failed to prove that leakproof quals could be\nused as indexquals, so I think we need to keep the plan choice the same.\n\nAnyway, this is kind of invasive, but I think it shouldn't really\nadd noticeable costs as long as we save relevant rowcounts rather\nthan recomputing them in createplan.c. Is it worth doing? I dunno.\nAlternativeSubPlan is pretty much a backwater, I think --- if it\nwere interesting performance-wise to a lot of people, more would\nhave been done with it by now.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 21 Jun 2020 21:39:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Improve planner cost estimations for alternative subplans"
},
{
"msg_contents": "Hi Melanie,\n\nSorry for the delay.\n\n> I've just started looking at this patch today, but I was wondering if\n> you might include a test case which minimally reproduces the original\n> problem you had.\nI could reproduce it with an easier generated data set, please see attached.\n\nHowever, to be honest with you, while searching I encountered a few \nexamples of the opposite behavior,\nwhen the patched version was slower than the master branch.\nSo I'm not so sure whether we should use the patch, maybe we should \nrather consider Tom's approach.\n\nBest, Alex",
"msg_date": "Fri, 24 Jul 2020 15:55:57 +0100",
"msg_from": "Alexey Bashtanov <bashtanov@imap.cc>",
"msg_from_op": true,
"msg_subject": "Re: Improve planner cost estimations for alternative subplans"
},
{
"msg_contents": "Hi Tom,\n\nsorry for the delay,\n> I experimented with adding a number-of-evaluations parameter to\n> cost_qual_eval, and found that the majority of call sites do have\n> something realistic they can pass. The attached very-much-WIP\n> patch shows my results so far. There's a lot of loose ends:\nI like the idea, so if we alternative subplans remain there\nI think we should implement it.\n\nBest, Alex\n\n\n",
"msg_date": "Fri, 24 Jul 2020 15:57:39 +0100",
"msg_from": "Alexey Bashtanov <bashtanov@imap.cc>",
"msg_from_op": true,
"msg_subject": "Re: Improve planner cost estimations for alternative subplans"
},
{
"msg_contents": "On Mon, Jun 22, 2020 at 9:39 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I wrote:\n> > Nope. The entire reason why we have that kluge is that we don't know\n> > until much later how many times we expect to execute the subplan.\n> > AlternativeSubPlan allows the decision which subplan form to use to be\n> > postponed till runtime; but when we're doing things like estimating the\n> > cost and selectivity of a where-clause, we don't know that.\n>\n> > Maybe there's some way to recast things to avoid that problem,\n> > but I have little clue what it'd look like.\n>\n> Actually ... maybe it's not that bad. Clearly there would be a\n> circularity issue for selectivity estimation, but all the alternatives\n> should have the same selectivity. Cost estimation is a different story:\n> by the time we need to do cost estimation for a subexpression, we do in\n> many cases have an idea how often the subexpression will be executed.\n>\n>\nI read your idea of \"ripping out all executor support for AlternativeSubPlan\n and instead having the planner replace an AlternativeSubPlan with\n the desired specific SubPlan somewhere late in planning, possibly\nsetrefs.c.\"\nin [1]. I was thinking that if we can do such a replacement sooner,\nfor example once we know the num_calls for the subplans, Unknown if it\nis possible though. If we can, then we can handle the issue here as well.\n\nThe attached is a very PoC version, I'm not sure if it is the right\ndirection\nto go. I'm sorry that I still need more time to understand your solution\nbelow but I'm too excited about your original idea.\n\n[1] https://www.postgresql.org/message-id/1992952.1592785225@sss.pgh.pa.us\n\n\n\n> I experimented with adding a number-of-evaluations parameter to\n> cost_qual_eval, and found that the majority of call sites do have\n> something realistic they can pass. The attached very-much-WIP\n> patch shows my results so far. 
There's a lot of loose ends:\n>\n> * Any call site using COST_QUAL_EVAL_DUMMY_NUM_EVALS is a potential spot\n> for future improvement. The only one that seems like it might be\n> fundamentally unsolvable is cost_subplan's usage; but that would only\n> matter if a subplan's testexpr contains an AlternativeSubPlan, which is\n> probably a negligible case in practice. The other ones seem to just\n> require more refactoring than I cared to do on a Sunday afternoon.\n>\n> * I did not do anything for postgres_fdw.c beyond making it compile.\n> We can surely do better there, but it might require some rethinking\n> of the way that plan costs get cached.\n>\n> * The merge and hash join costsize.c functions estimate costs of qpquals\n> (i.e. quals to be applied at the join that are not being used as merge\n> or hash conditions) by computing cost_qual_eval of the whole\n> joinrestrictlist and then subtracting off the cost of the merge or hash\n> quals. This is kind of broken if we want to use different num_eval\n> estimates for the qpquals and the merge/hash quals, which I think we do.\n> This probably just needs some refactoring to fix. We also need to save\n> the relevant rowcounts in the join Path nodes so that createplan.c can\n> do the right thing.\n>\n> * I think it might be possible to improve the situation in\n> get_agg_clause_costs() if we're willing to postpone collection\n> of the actual aggregate costs till later. This'd require two\n> passes over the aggregate expressions, but that seems like it\n> might not be terribly expensive. (I'd be inclined to also look\n> at the idea of merging duplicate agg calls at plan time not\n> run time, if we refactor that.)\n>\n> * I had to increase the number of table rows in one updatable_views.sql\n> test to keep the plans the same. Without that, the planner realized\n> that a seqscan would be cheaper than an indexscan. 
The result wasn't\n> wrong exactly, but it failed to prove that leakproof quals could be\n> used as indexquals, so I think we need to keep the plan choice the same.\n>\n> Anyway, this is kind of invasive, but I think it shouldn't really\n> add noticeable costs as long as we save relevant rowcounts rather\n> than recomputing them in createplan.c. Is it worth doing? I dunno.\n> AlternativeSubPlan is pretty much a backwater, I think --- if it\n> were interesting performance-wise to a lot of people, more would\n> have been done with it by now.\n>\n> regards, tom lane\n>\n>\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Mon, 17 Aug 2020 22:12:39 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve planner cost estimations for alternative subplans"
},
{
"msg_contents": "On Mon, Aug 17, 2020 at 10:12 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n>\n>\n> On Mon, Jun 22, 2020 at 9:39 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> I wrote:\n>> > Nope. The entire reason why we have that kluge is that we don't know\n>> > until much later how many times we expect to execute the subplan.\n>> > AlternativeSubPlan allows the decision which subplan form to use to be\n>> > postponed till runtime; but when we're doing things like estimating the\n>> > cost and selectivity of a where-clause, we don't know that.\n>>\n>> > Maybe there's some way to recast things to avoid that problem,\n>> > but I have little clue what it'd look like.\n>>\n>> Actually ... maybe it's not that bad. Clearly there would be a\n>> circularity issue for selectivity estimation, but all the alternatives\n>> should have the same selectivity. Cost estimation is a different story:\n>> by the time we need to do cost estimation for a subexpression, we do in\n>> many cases have an idea how often the subexpression will be executed.\n>>\n>>\n> I read your idea of \"ripping out all executor support for\n> AlternativeSubPlan\n> and instead having the planner replace an AlternativeSubPlan with\n> the desired specific SubPlan somewhere late in planning, possibly\n> setrefs.c.\"\n> in [1]. I was thinking that if we can do such a replacement sooner,\n> for example once we know the num_calls for the subplans, Unknown if it\n> is possible though. If we can, then we can handle the issue here as well.\n>\n> The attached is a very PoC version, I'm not sure if it is the right\n> direction\n> to go.\n>\n\nThe idea behind it is if we have a RelOptInfo which have\nsome AlternativeSubPlan,\nand assume these subplans have some correlated vars which can be expressed\nas\ndeps_relids. Then we can convert the AlternativeSubPlan to SubPlan once\nbms_is_subset(deps_relids, rel->relids). 
My patch is able to fix the\nissue reported\nhere and it only converts the AlternativeSubPlan in rel->reltarget for demo\npurpose.\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Wed, 26 Aug 2020 16:21:35 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve planner cost estimations for alternative subplans"
},
{
"msg_contents": "On Wed, Aug 26, 2020 at 4:21 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n>\n>\n> On Mon, Aug 17, 2020 at 10:12 PM Andy Fan <zhihui.fan1213@gmail.com>\n> wrote:\n>\n>>\n>>\n>> On Mon, Jun 22, 2020 at 9:39 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>>> I wrote:\n>>> > Nope. The entire reason why we have that kluge is that we don't know\n>>> > until much later how many times we expect to execute the subplan.\n>>> > AlternativeSubPlan allows the decision which subplan form to use to be\n>>> > postponed till runtime; but when we're doing things like estimating the\n>>> > cost and selectivity of a where-clause, we don't know that.\n>>>\n>>> > Maybe there's some way to recast things to avoid that problem,\n>>> > but I have little clue what it'd look like.\n>>>\n>>> Actually ... maybe it's not that bad. Clearly there would be a\n>>> circularity issue for selectivity estimation, but all the alternatives\n>>> should have the same selectivity. Cost estimation is a different story:\n>>> by the time we need to do cost estimation for a subexpression, we do in\n>>> many cases have an idea how often the subexpression will be executed.\n>>>\n>>>\n>> I read your idea of \"ripping out all executor support for\n>> AlternativeSubPlan\n>> and instead having the planner replace an AlternativeSubPlan with\n>> the desired specific SubPlan somewhere late in planning, possibly\n>> setrefs.c.\"\n>> in [1]. I was thinking that if we can do such a replacement sooner,\n>> for example once we know the num_calls for the subplans, Unknown if it\n>> is possible though. If we can, then we can handle the issue here as well.\n>>\n>> The attached is a very PoC version, I'm not sure if it is the right\n>> direction\n>> to go.\n>>\n>\n> The idea behind it is if we have a RelOptInfo which have\n> some AlternativeSubPlan,\n> and assume these subplans have some correlated vars which can be expressed\n> as\n> deps_relids. 
Then we can convert the AlternativeSubPlan to SubPlan once\n> bms_is_subset(subplan->deps_relids, rel->relids).\n>\n\nThe way of figuring out subplan->deps_relids was wrong in my patch, I will\nfix it later.\nBut the general idea is the same.\n\n\n> My patch is able to fix the issue reported\n> here and it only converts the AlternativeSubPlan in rel->reltarget for\n> demo purpose.\n>\n> --\n> Best Regards\n> Andy Fan\n>\n\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Sat, 29 Aug 2020 06:38:36 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve planner cost estimations for alternative subplans"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile preparing my pgcon talk I noticed that our hash-agg performance\ndegraded noticeably. Looks to me like it's due to the spilling-hashagg\nchanges.\n\nSample benchmark:\n\nconfig:\n-c huge_pages=on -c shared_buffers=32GB -c jit=0 -c max_parallel_workers_per_gather=0\n(largely just to reduce variance)\n\ndata prep:\nCREATE TABLE fewgroups_many_rows AS SELECT (random() * 4)::int cat, (random()*10000)::int val FROM generate_series(1, 100000000);\nVACUUM (FREEZE, ANALYZE) fewgroups_many_rows;\n\ntest prep:\nCREATE EXTENSION IF NOT EXISTS pg_prewarm;SELECT pg_prewarm('fewgroups_many_rows', 'buffer');\n\ntest:\nSELECT cat, count(*) FROM fewgroups_many_rows GROUP BY 1;\n\nUsing best-of-three timing:\n\n12 12221.031 ms\nmaster 13855.129 ms\n\nWhile not the end of the world, that's a definitely noticable and\nreproducible slowdown (~12%).\n\nI don't think this is actually an inherent cost, but a question of how\nthe code ended up being organized. Here's a perf diff of profiles for\nboth versions:\n\n# Baseline Delta Abs Shared Object Symbol\n# ........ ......... ................ .........................................\n#\n +6.70% postgres [.] LookupTupleHashEntryHash\n +6.37% postgres [.] prepare_hash_slot\n +4.74% postgres [.] TupleHashTableHash_internal.isra.0\n 20.36% -2.89% postgres [.] ExecInterpExpr\n 6.31% -2.73% postgres [.] lookup_hash_entries\n +2.36% postgres [.] lookup_hash_entry\n +2.14% postgres [.] ExecJustAssignScanVar\n 2.28% +1.97% postgres [.] ExecScan\n 2.54% +1.93% postgres [.] MemoryContextReset\n 3.84% -1.86% postgres [.] SeqNext\n 10.19% -1.50% postgres [.] tts_buffer_heap_getsomeattrs\n +1.42% postgres [.] hash_bytes_uint32\n +1.39% postgres [.] TupleHashTableHash\n +1.10% postgres [.] tts_virtual_clear\n 3.36% -0.74% postgres [.] ExecAgg\n +0.45% postgres [.] CheckForSerializableConflictOutNeeded\n 0.25% +0.44% postgres [.] hashint4\n 5.80% -0.35% postgres [.] tts_minimal_getsomeattrs\n 1.91% -0.33% postgres [.] 
heap_getnextslot\n 4.86% -0.32% postgres [.] heapgettup_pagemode\n 1.46% -0.32% postgres [.] tts_minimal_clear\n\nWhile some of this is likely is just noise, it's pretty clear that we\nspend a substantial amount of additional time below\nlookup_hash_entries().\n\nAnd looking at the code, I'm not too surprised:\n\nBefore there was basically one call from nodeAgg.c to execGrouping.c for\neach tuple and hash table. Now it's a lot more complicated:\n1) nodeAgg.c: prepare_hash_slot()\n2) execGrouping.c: TupleHashTableHash()\n3) nodeAgg.c: lookup_hash_entry()\n4) execGrouping.c: LookupTupleHashEntryHash()\n\nFor each of these data needs to be peeled out of one or more of AggState\n/ AggStatePerHashData / TupleHashTable. There's no way the compiler can\nknow that nothing inside those changes, therefore it has to reload the\ncontents repeatedly. By my look at the profiles, that's where most of\nthe time is going.\n\nThere's also the issue that the signalling whether to insert / not to\ninsert got unnecessarily complicated. There's several checks:\n1) lookup_hash_entry() (p_isnew = aggstate->hash_spill_mode ? NULL : &isnew;)\n2) LookupTupleHashEntry_internal() (if (isnew))\n3) lookup_hash_entry() (if (entry == NULL) and if (isnew))\n4) lookup_hash_entries() if (!in_hash_table)\n\nNot performance related: I am a bit confused why the new per-hash stuff\nin lookup_hash_entries() isn't in lookup_hash_entry()? 
I assume that's\nbecause of agg_refill_hash_table()?\n\n\nWhy isn't the flow more like this:\n1) prepare_hash_slot()\n2) if (aggstate->hash_spill_mode) goto 3; else goto 4\n3) entry = LookupTupleHashEntry(&hash); if (!entry) hashagg_spill_tuple();\n4) InsertTupleHashEntry(&hash, &isnew); if (isnew) initialize(entry)\n\nThat way there's again exactly one call to execGrouping.c, there's no\nneed for nodeAgg to separately compute the hash, there's far fewer\nbranches...\n\nDoing things this way might perhaps make agg_refill_hash_table() a tiny\nbit more complicated, but it'll also avoid the slowdown for the vast\nmajority of cases where we're not spilling.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 5 Jun 2020 21:11:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "hashagg slowdown due to spill changes"
},
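The control flow Andres proposes above — exactly one execGrouping.c call per input tuple, with spill mode switching from insert-or-create to lookup-or-spill — can be illustrated with a toy Python simulation. Every identifier below is invented for illustration; the real code paths are prepare_hash_slot(), LookupTupleHashEntry(), and hashagg_spill_tuple() in nodeAgg.c/execGrouping.c:

```python
def hash_agg(tuples, max_groups):
    """Toy hash aggregate illustrating the single-lookup flow: one
    hash-table access per input tuple.  Once the table reaches
    max_groups we enter spill mode, where a miss spills the tuple for
    a later refill pass instead of creating a new group."""
    table = {}       # group key -> count (stand-in for per-group agg state)
    spilled = []     # tuples deferred to a later pass
    spill_mode = False
    for key in tuples:
        if spill_mode:
            count = table.get(key)       # lookup only, never insert
            if count is None:
                spilled.append(key)
            else:
                table[key] = count + 1
        else:
            table[key] = table.get(key, 0) + 1   # insert-or-update
            if len(table) >= max_groups:
                spill_mode = True
    return table, spilled

counts, spilled = hash_agg([1, 2, 1, 3, 2, 1], max_groups=2)
assert counts == {1: 3, 2: 2} and spilled == [3]
```

The point of the proposed refactoring is that each branch makes a single call into the hash-table module, so nodeAgg.c never has to compute the hash separately and re-pass it through multiple helper layers.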
{
"msg_contents": "On 2020-Jun-05, Andres Freund wrote:\n\n> While preparing my pgcon talk I noticed that our hash-agg performance\n> degraded noticeably. Looks to me like it's due to the spilling-hashagg\n> changes.\n\nJeff, what are your thoughts on this?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 8 Jun 2020 16:15:17 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: hashagg slowdown due to spill changes"
},
{
"msg_contents": "On Fri, 2020-06-05 at 21:11 -0700, Andres Freund wrote:\n> Before there was basically one call from nodeAgg.c to execGrouping.c\n> for\n> each tuple and hash table. Now it's a lot more complicated:\n> 1) nodeAgg.c: prepare_hash_slot()\n> 2) execGrouping.c: TupleHashTableHash()\n> 3) nodeAgg.c: lookup_hash_entry()\n> 4) execGrouping.c: LookupTupleHashEntryHash()\n\nThe reason that I did it that way was to be able to store the hash\nalong with the saved tuple (similar to what HashJoin does), which\navoids recalculation.\n\nThat could be a nice savings for some cases, like when work_mem is\nsmall but the data still fits in system memory, which I expect to be\nfairly common. But based on your numbers, it might be a bad trade-off\noverall.\n\n> Why isn't the flow more like this:\n> 1) prepare_hash_slot()\n> 2) if (aggstate->hash_spill_mode) goto 3; else goto 4\n> 3) entry = LookupTupleHashEntry(&hash); if (!entry)\n> hashagg_spill_tuple();\n> 4) InsertTupleHashEntry(&hash, &isnew); if (isnew) initialize(entry)\n\nI'll work up a patch to refactor this. I'd still like to see if we can\npreserve the calculate-hash-once behavior somehow.\n\nRegards,\n\tJeff Davis\n \n\n\n\n",
"msg_date": "Mon, 08 Jun 2020 13:41:29 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: hashagg slowdown due to spill changes"
},
{
"msg_contents": "On Fri, 2020-06-05 at 21:11 -0700, Andres Freund wrote:\n> Why isn't the flow more like this:\n> 1) prepare_hash_slot()\n> 2) if (aggstate->hash_spill_mode) goto 3; else goto 4\n> 3) entry = LookupTupleHashEntry(&hash); if (!entry)\n> hashagg_spill_tuple();\n> 4) InsertTupleHashEntry(&hash, &isnew); if (isnew) initialize(entry)\n\nI see, you are suggesting that I change around the execGrouping.c\nsignatures to return the hash, which will avoid the extra call. That\nmakes more sense.\n\nRegards,\n\tJeff Davis\n\n\n\n\n\n\n",
"msg_date": "Mon, 08 Jun 2020 13:55:47 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: hashagg slowdown due to spill changes"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-08 13:41:29 -0700, Jeff Davis wrote:\n> On Fri, 2020-06-05 at 21:11 -0700, Andres Freund wrote:\n> > Before there was basically one call from nodeAgg.c to execGrouping.c\n> > for\n> > each tuple and hash table. Now it's a lot more complicated:\n> > 1) nodeAgg.c: prepare_hash_slot()\n> > 2) execGrouping.c: TupleHashTableHash()\n> > 3) nodeAgg.c: lookup_hash_entry()\n> > 4) execGrouping.c: LookupTupleHashEntryHash()\n> \n> The reason that I did it that way was to be able to store the hash\n> along with the saved tuple (similar to what HashJoin does), which\n> avoids recalculation.\n\nThat makes sense. But then you can just use a separate call into\nexecGrouping for that purpose.\n\n\n> > Why isn't the flow more like this:\n> > 1) prepare_hash_slot()\n> > 2) if (aggstate->hash_spill_mode) goto 3; else goto 4\n> > 3) entry = LookupTupleHashEntry(&hash); if (!entry)\n> > hashagg_spill_tuple();\n> > 4) InsertTupleHashEntry(&hash, &isnew); if (isnew) initialize(entry)\n> \n> I'll work up a patch to refactor this. I'd still like to see if we can\n> preserve the calculate-hash-once behavior somehow.\n\nCool!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 8 Jun 2020 14:08:19 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: hashagg slowdown due to spill changes"
},
{
"msg_contents": "On Fri, 2020-06-05 at 21:11 -0700, Andres Freund wrote:\n> Hi,\n> \n> While preparing my pgcon talk I noticed that our hash-agg performance\n> degraded noticeably. Looks to me like it's due to the spilling-\n> hashagg\n> changes.\n\nAttached a proposed fix. (Might require some minor cleanup).\n\nThe only awkward part is that LookupTupleHashEntry() needs a new out\nparameter to pass the hash value back to the caller. Ordinarily, the\ncaller can get that from the returned entry, but if isnew==NULL, then\nthe function might return NULL (and the caller wouldn't have an entry\nfrom which to read the hash value).\n\nRegards,\n\tJeff Davis",
"msg_date": "Wed, 10 Jun 2020 18:15:39 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: hashagg slowdown due to spill changes"
},
{
"msg_contents": "Hi, \n\nOn June 10, 2020 6:15:39 PM PDT, Jeff Davis <pgsql@j-davis.com> wrote:\n>On Fri, 2020-06-05 at 21:11 -0700, Andres Freund wrote:\n>> Hi,\n>> \n>> While preparing my pgcon talk I noticed that our hash-agg performance\n>> degraded noticeably. Looks to me like it's due to the spilling-\n>> hashagg\n>> changes.\n>\n>Attached a proposed fix. (Might require some minor cleanup).\n>\n>The only awkward part is that LookupTupleHashEntry() needs a new out\n>parameter to pass the hash value back to the caller. Ordinarily, the\n>caller can get that from the returned entry, but if isnew==NULL, then\n>the function might return NULL (and the caller wouldn't have an entry\n>from which to read the hash value).\n\nGreat!\n\nDid you run any performance tests?\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Thu, 11 Jun 2020 10:45:25 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: hashagg slowdown due to spill changes"
},
{
"msg_contents": "On Thu, 2020-06-11 at 10:45 -0700, Andres Freund wrote:\n> Did you run any performance tests?\n\nYes, I reproduced your ~12% regression from V12, and this patch nearly\neliminated it for me.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Thu, 11 Jun 2020 11:14:02 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: hashagg slowdown due to spill changes"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-11 11:14:02 -0700, Jeff Davis wrote:\n> On Thu, 2020-06-11 at 10:45 -0700, Andres Freund wrote:\n> > Did you run any performance tests?\n>\n> Yes, I reproduced your ~12% regression from V12, and this patch nearly\n> eliminated it for me.\n\nI spent a fair bit of time looking at the difference. Jeff had let me\nknow on chat that he was still seeing some difference, but couldn't\nquite figure out where that was.\n\nTrying it out myself, I observed that the patch helped, but not that\nmuch. After a bit I found one major reason for why:\nLookupTupleHashEntryHash() assigned the hash to pointer provided by the\ncaller's before doing the insertion. That ended up causing a pipeline\nstall (I assume it's store forwarding, but not sure). Moving the\nassignment to the caller variable to after the insertion got rid of\nthat.\n\nIt got within 3-4% after that change. I did a number of small\nmicrooptimizations that each helped, but didn't get quite get to the\nlevel of 12.\n\nFinally I figured out that that's due to an issue outside of nodeAgg.c\nitself:\n\ncommit 4cad2534da6d17067d98cf04be2dfc1bda8f2cd0\nAuthor: Tomas Vondra <tomas.vondra@postgresql.org>\nDate: 2020-05-31 14:43:13 +0200\n\n Use CP_SMALL_TLIST for hash aggregate\n\nDue to this change we end up with an additional projection in queries\nlike this:\n\npostgres[212666][1]=# \\d fewgroups_many_rows\n Table \"public.fewgroups_many_rows\"\n┌────────┬─────────┬───────────┬──────────┬─────────┐\n│ Column │ Type │ Collation │ Nullable │ Default │\n├────────┼─────────┼───────────┼──────────┼─────────┤\n│ cat │ integer │ │ not null │ │\n│ val │ integer │ │ not null │ │\n└────────┴─────────┴───────────┴──────────┴─────────┘\n\npostgres[212666][1]=# explain SELECT cat, count(*) FROM fewgroups_many_rows GROUP BY 1;\n┌───────────────────────────────────────────────────────────────────────────────────────┐\n│ QUERY PLAN 
│\n├───────────────────────────────────────────────────────────────────────────────────────┤\n│ HashAggregate (cost=1942478.48..1942478.53 rows=5 width=12) │\n│ Group Key: cat │\n│ -> Seq Scan on fewgroups_many_rows (cost=0.00..1442478.32 rows=100000032 width=4) │\n└───────────────────────────────────────────────────────────────────────────────────────┘\n(3 rows)\n\nas 'val' is \"projected away\"..\n\n\nAfter neutering the tlist change, Jeff's patch and my changes to it\nyield performance *above* v12.\n\n\nI don't see why it's ok to force an additional projection in the very\ncommon case of hashaggs over a few rows. So I think we need to rethink\n4cad2534da6.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Fri, 12 Jun 2020 14:37:15 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: hashagg slowdown due to spill changes"
},
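Whether the extra projection added by 4cad2534da pays off is essentially a trade between per-row CPU spent projecting and bytes written to spill files. A back-of-the-envelope model of that trade (the formula and all constants are invented for illustration, not PostgreSQL's cost model):

```python
def projection_pays_off(n_rows, spill_fraction, proj_cost_per_row,
                        wide_bytes, narrow_bytes, io_cost_per_byte):
    """Compare projecting every input row to a narrow tuple (CPU cost
    up front, smaller spill files) against spilling wide tuples as-is."""
    with_proj = (n_rows * proj_cost_per_row
                 + n_rows * spill_fraction * narrow_bytes * io_cost_per_byte)
    without_proj = n_rows * spill_fraction * wide_bytes * io_cost_per_byte
    return with_proj < without_proj

# Few groups, nothing spills (the benchmark case above): projecting
# every one of the 100M rows is pure overhead.
assert not projection_pays_off(100_000_000, 0.0, 0.01, 200, 20, 0.001)
# Most rows spill: shrinking the spilled tuples wins despite the CPU cost.
assert projection_pays_off(100_000_000, 0.9, 0.01, 200, 20, 0.001)
```

This is the intuition behind requesting small tlists only when the estimated number of groups is large relative to memory, or projecting only the tuples that actually spill.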
{
"msg_contents": "On Fri, 2020-06-12 at 14:37 -0700, Andres Freund wrote:\n> I don't see why it's ok to force an additional projection in the very\n> common case of hashaggs over a few rows. So I think we need to\n> rethink\n> 4cad2534da6.\n\nOne possibility is to project only spilled tuples, more similar to\nMelanie's patch from a while ago:\n\n\nhttps://www.postgresql.org/message-id/CAAKRu_aefEsv+UkQWqu+ioEnoiL2LJu9Diuh9BR8MbyXuZ0j4A@mail.gmail.com\n\nWhich makes sense, but it's also more code.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Fri, 12 Jun 2020 15:29:08 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: hashagg slowdown due to spill changes"
},
{
"msg_contents": "On Fri, Jun 12, 2020 at 03:29:08PM -0700, Jeff Davis wrote:\n>On Fri, 2020-06-12 at 14:37 -0700, Andres Freund wrote:\n>> I don't see why it's ok to force an additional projection in the very\n>> common case of hashaggs over a few rows. So I think we need to\n>> rethink\n>> 4cad2534da6.\n>\n>One possibility is to project only spilled tuples, more similar to\n>Melanie's patch from a while ago:\n>\n>\n>https://www.postgresql.org/message-id/CAAKRu_aefEsv+UkQWqu+ioEnoiL2LJu9Diuh9BR8MbyXuZ0j4A@mail.gmail.com\n>\n>Which makes sense, but it's also more code.\n>\n\nI agree, we should revert 4cad2534da and only project tuples when we\nactually need to spill them. Did any of the WIP patches actually\nimplement that, or do we need to write that patch from scratch?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 13 Jun 2020 01:06:25 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: hashagg slowdown due to spill changes"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-13 01:06:25 +0200, Tomas Vondra wrote:\n> I agree, we should revert 4cad2534da and only project tuples when we\n> actually need to spill them.\n\nThere are cases where projecting helps for non-spilling aggregates too,\nbut only for the representative tuple. It doesn't help in the case at\nhand, because there's just 5 hashtable entries but millions of rows. So\nwe're unnecessarily projecting all-5 rows. But when there are many\ndifferent groups, it'd be different, because then the size of the\nrepresentative tuple can matter substantially.\n\nDo you think we should tackle this for 13? To me 4cad2534da seems like a\nsomewhat independent improvement to spillable hashaggs.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 12 Jun 2020 17:12:26 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: hashagg slowdown due to spill changes"
},
{
"msg_contents": "On Fri, 2020-06-12 at 17:12 -0700, Andres Freund wrote:\n> Do you think we should tackle this for 13? To me 4cad2534da seems\n> like a\n> somewhat independent improvement to spillable hashaggs.\n\nWe've gone back and forth on this issue a few times, so let's try to\nget some agreement before we revert 4cad2534da. I added Robert because\nhe also seemed to think it was a reasonable idea.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Sat, 13 Jun 2020 11:48:09 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: hashagg slowdown due to spill changes"
},
{
"msg_contents": "On Sat, Jun 13, 2020 at 11:48:09AM -0700, Jeff Davis wrote:\n>On Fri, 2020-06-12 at 17:12 -0700, Andres Freund wrote:\n>> Do you think we should tackle this for 13? To me 4cad2534da seems\n>> like a\n>> somewhat independent improvement to spillable hashaggs.\n>\n>We've gone back and forth on this issue a few times, so let's try to\n>get some agreement before we revert 4cad2534da. I added Robert because\n>he also seemed to think it was a reasonable idea.\n>\n\nI can't speak for Robert, but I hadn't expected the overhead of the extra\nprojection to be this high. And I agree with Andres it's not very nice we have\nto do this even for aggregates with just a handful of groups that don't\nneed to spill.\n\nIn any case, I think we need to address this somehow for v13 - either we\nkeep the 4cad2534da patch in, or we tweak the cost model to reflect the\nextra I/O costs, or we project only when spilling.\n\nI'm not in a position to whip up a patch soon, though :-(\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 13 Jun 2020 22:19:41 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: hashagg slowdown due to spill changes"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-12 15:29:08 -0700, Jeff Davis wrote:\n> On Fri, 2020-06-12 at 14:37 -0700, Andres Freund wrote:\n> > I don't see why it's ok to force an additional projection in the very\n> > common case of hashaggs over a few rows. So I think we need to\n> > rethink\n> > 4cad2534da6.\n> \n> One possibility is to project only spilled tuples, more similar to\n> Melanie's patch from a while ago:\n> \n> \n> https://www.postgresql.org/message-id/CAAKRu_aefEsv+UkQWqu+ioEnoiL2LJu9Diuh9BR8MbyXuZ0j4A@mail.gmail.com\n> \n> Which makes sense, but it's also more code.\n\nI'm somewhat inclined to think that we should revert 4cad2534da6 and\nthen look at how precisely to tackle this in 14.\n\nIt'd probably make sense to request small tlists when the number of\nestimated groups is large, and not otherwise.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 14 Jun 2020 11:14:18 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: hashagg slowdown due to spill changes"
},
{
"msg_contents": "On Sun, 2020-06-14 at 11:14 -0700, Andres Freund wrote:\n> I'm somewhat inclined to think that we should revert 4cad2534da6 and\n> then look at how precisely to tackle this in 14.\n\nI'm fine with that.\n\n> It'd probably make sense to request small tlists when the number of\n> estimated groups is large, and not otherwise.\n\nThat seems like a nice compromise that would be non-invasive, at least\nfor create_agg_plan().\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Sun, 14 Jun 2020 23:09:55 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: hashagg slowdown due to spill changes"
},
{
"msg_contents": "On Sun, Jun 14, 2020 at 11:09:55PM -0700, Jeff Davis wrote:\n>On Sun, 2020-06-14 at 11:14 -0700, Andres Freund wrote:\n>> I'm somewhat inclined to think that we should revert 4cad2534da6 and\n>> then look at how precisely to tackle this in 14.\n>\n>I'm fine with that.\n>\n\nI don't see how we could just revert 4cad2534d and leave this for v14.\n\nThe hashagg spilling is IMHO almost guaranteed to be a pain point for\nsome users, as it will force some queries to serialize large amounts of\ndata. Yes, some of this is a cost for hashagg enforcing work_mem at\nruntime, I'm fine with that. We'd get reports about that too, but we can\njustify that cost ...\n\nBut just reverting 4cad2534d will make this much worse, I think, as\nillustrated by the benchmarks I did in [1]. And no, this is not really\nfixable by tweaking the cost parameters - even with the current code\n(i.e. 4cad2534d in place) I had to increase random_page_cost to 60 on\nthe temp tablespace (on SATA RAID) to get good plans with parallelism\nenabled. I haven't tried, but I presume without 4cad2534d I'd have to\npush r_p_c even further ...\n\n[1] https://www.postgresql.org/message-id/20200519151202.u2p2gpiawoaznsv2%40development\n\n>> It'd probably make sense to request small tlists when the number of\n>> estimated groups is large, and not otherwise.\n>\n>That seems like a nice compromise that would be non-invasive, at least\n>for create_agg_plan().\n>\n\nMaybe. It'd certainly better than nothing. It's not clear to me what\nwould a good threshold be, though. And it's not going to handle cases of\nunder-estimates.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 15 Jun 2020 15:34:03 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: hashagg slowdown due to spill changes"
},
{
"msg_contents": "On Mon, Jun 15, 2020 at 9:34 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> But just reverting 4cad2534d will make this much worse, I think, as\n> illustrated by the benchmarks I did in [1].\n\nI share this concern, although I do not know what we should do about it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 15 Jun 2020 11:19:52 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: hashagg slowdown due to spill changes"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Jun 15, 2020 at 9:34 AM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n>> But just reverting 4cad2534d will make this much worse, I think, as\n>> illustrated by the benchmarks I did in [1].\n\n> I share this concern, although I do not know what we should do about it.\n\nWell, it's only June. Let's put it on the open issues list for v13\nand continue to think about it. I concur that the hashagg spill patch\nhas made this something that we should worry about for v13, so just\nreverting without a better answer isn't very appetizing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Jun 2020 12:46:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: hashagg slowdown due to spill changes"
},
{
"msg_contents": "On Mon, 2020-06-15 at 11:19 -0400, Robert Haas wrote:\n> On Mon, Jun 15, 2020 at 9:34 AM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n> > But just reverting 4cad2534d will make this much worse, I think, as\n> > illustrated by the benchmarks I did in [1].\n> \n> I share this concern, although I do not know what we should do about\n> it.\n\nI attached an updated version of Melanie's patch, combined with the\nchanges to copy only the necessary attributes to a new slot before\nspilling. There are a couple changes:\n\n* I didn't see a reason to descend into a GroupingFunc node, so I\nremoved that.\n\n* I used a flag in the context rather than two separate callbacks to\nthe expression walker.\n\nThis patch gives the space benefits that we see on master, without the\nregression for small numbers of tuples. I saw a little bit of noise in\nmy test results, but I'm pretty sure it's a win all around. It could\nuse some review/cleanup though.\n\nRegards,\n\tJeff Davis",
"msg_date": "Mon, 15 Jun 2020 19:38:45 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: hashagg slowdown due to spill changes"
},
{
"msg_contents": "On Mon, Jun 15, 2020 at 07:38:45PM -0700, Jeff Davis wrote:\n>On Mon, 2020-06-15 at 11:19 -0400, Robert Haas wrote:\n>> On Mon, Jun 15, 2020 at 9:34 AM Tomas Vondra\n>> <tomas.vondra@2ndquadrant.com> wrote:\n>> > But just reverting 4cad2534d will make this much worse, I think, as\n>> > illustrated by the benchmarks I did in [1].\n>>\n>> I share this concern, although I do not know what we should do about\n>> it.\n>\n>I attached an updated version of Melanie's patch, combined with the\n>changes to copy only the necessary attributes to a new slot before\n>spilling. There are a couple changes:\n>\n>* I didn't see a reason to descend into a GroupingFunc node, so I\n>removed that.\n>\n>* I used a flag in the context rather than two separate callbacks to\n>the expression walker.\n>\n>This patch gives the space benefits that we see on master, without the\n>regression for small numbers of tuples. I saw a little bit of noise in\n>my test results, but I'm pretty sure it's a win all around. It could\n>use some review/cleanup though.\n>\n\nLooks reasonable. I can't redo my tests at the moment, the machine is\nbusy with something else. I'll give it a try over the weekend.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 16 Jun 2020 18:54:49 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: hashagg slowdown due to spill changes"
},
{
"msg_contents": "Hi,\n\nI think it'd be good to get the changes that aren't related to\nprojection merged. As far as I can tell there's performance regressions\nboth because of the things I'd listed upthread, and due to the\nprojection issue. That's not obvious because we first won\nperformance and then lost it again in several incremental steps.\n\nCOPY (SELECT (random() * 4)::int cat, (random()*10000)::int val FROM generate_series(1, 100000000)) TO '/tmp/data' WITH BINARY;\nBEGIN;\nDROP TABLE IF EXISTS fewgroups_many_rows;\nCREATE TABLE fewgroups_many_rows(cat int4 not null, val int4 not null);\nCOPY fewgroups_many_rows FROM '/tmp/data' WITH (FORMAT BINARY, FREEZE);\nCOMMIT;\nVACUUM FREEZE fewgroups_many_rows;\n\nTest prep:\n\nTest query:\nSET seed=0;SELECT cat, count(*) FROM fewgroups_many_rows GROUP BY 1;\n(the seed seems to reduce noise due to hashtable iv being the same)\n\nbest of six:\n9e1c9f959422192bbe1b842a2a1ffaf76b080196 12031.906 ms\nd52eaa094847d395f942827a6f413904e516994c 12045.487 ms\nac88807f9b227ddcd92b8be9a053094837c1b99a 11950.006 ms\n36d22dd95bc87ca68e742da91f47f8826f8758c9 11769.991 ms\n5ac4e9a12c6543414891cd8972b2cd36a08e40cc\t11551.932 ms\n1fdb7f9789c4550204cd62d1746a7deed1dc4c29 11706.948 ms\n4eaea3db150af56aa2e40efe91997fd25f3b6d73 11999.908 ms\n11de6c903da99a4b2220acfa776fc26c7f384ccc 11999.054 ms\nb7fabe80df9a65010bfe5e5d0a979bacebfec382 12165.463 ms\n2742c45080077ed3b08b810bb96341499b86d530 12137.505 ms\n1f39bce021540fde00990af55b4432c55ef4b3c7 12501.764 ms\n9b60c4b979bce060495e2b05ba01d1cc6bffdd2d 12389.047 ms\n4cad2534da6d17067d98cf04be2dfc1bda8f2cd0 13319.786 ms\n1b2c29469a58cd9086bd86e20c708eb437564a80 13330.616 ms\n\nThere's certainly some noise in here, but I think the trends are valid.\n\n\n> /*\n> - * find_unaggregated_cols\n> - *\t Construct a bitmapset of the column numbers of un-aggregated Vars\n> - *\t appearing in our targetlist and qual (HAVING clause)\n> + * Walk tlist and qual to find referenced colnos, dividing them into\n> + * aggregated and unaggregated sets.\n> */\n> -static Bitmapset *\n> -find_unaggregated_cols(AggState *aggstate)\n> +static void\n> +find_cols(AggState *aggstate, Bitmapset **aggregated, Bitmapset **unaggregated)\n> {\n\nIt's not this patch's fault, but none, really none, of this stuff should\nbe in the executor.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 22 Jun 2020 21:01:57 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: hashagg slowdown due to spill changes"
},
{
"msg_contents": "On Mon, Jun 22, 2020 at 9:02 PM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> > /*\n> > - * find_unaggregated_cols\n> > - * Construct a bitmapset of the column numbers of un-aggregated Vars\n> > - * appearing in our targetlist and qual (HAVING clause)\n> > + * Walk tlist and qual to find referenced colnos, dividing them into\n> > + * aggregated and unaggregated sets.\n> > */\n> > -static Bitmapset *\n> > -find_unaggregated_cols(AggState *aggstate)\n> > +static void\n> > +find_cols(AggState *aggstate, Bitmapset **aggregated, Bitmapset\n> **unaggregated)\n> > {\n>\n> It's not this patch's fault, but none, really none, of this stuff should\n> be in the executor.\n>\n>\nWere you thinking it could be done in grouping_planner() and then the\nbitmaps could be saved in the PlannedStmt?\nOr would you have to wait until query_planner()? Or are you imagining\nsomewhere else entirely?\n\n-- \nMelanie Plageman\n\nOn Mon, Jun 22, 2020 at 9:02 PM Andres Freund <andres@anarazel.de> wrote:\n> /*\n> - * find_unaggregated_cols\n> - * Construct a bitmapset of the column numbers of un-aggregated Vars\n> - * appearing in our targetlist and qual (HAVING clause)\n> + * Walk tlist and qual to find referenced colnos, dividing them into\n> + * aggregated and unaggregated sets.\n> */\n> -static Bitmapset *\n> -find_unaggregated_cols(AggState *aggstate)\n> +static void\n> +find_cols(AggState *aggstate, Bitmapset **aggregated, Bitmapset **unaggregated)\n> {\n\nIt's not this patch's fault, but none, really none, of this stuff should\nbe in the executor.\nWere you thinking it could be done in grouping_planner() and then thebitmaps could be saved in the PlannedStmt?Or would you have to wait until query_planner()? Or are you imaginingsomewhere else entirely?-- Melanie Plageman",
"msg_date": "Tue, 23 Jun 2020 09:23:57 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: hashagg slowdown due to spill changes"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-23 09:23:57 -0700, Melanie Plageman wrote:\n> On Mon, Jun 22, 2020 at 9:02 PM Andres Freund <andres@anarazel.de> wrote:\n> > It's not this patch's fault, but none, really none, of this stuff should\n> > be in the executor.\n> >\n> >\n> Were you thinking it could be done in grouping_planner() and then the\n> bitmaps could be saved in the PlannedStmt?\n> Or would you have to wait until query_planner()? Or are you imagining\n> somewhere else entirely?\n\nI haven't thought about it in too much detail, but I would say\ncreate_agg_plan() et al. I guess there's some argument to be made to do\nit in setrefs.c, because we already do convert_combining_aggrefs() there\n(but I don't like that much).\n\nThere's no reason to do it before we actually decided on one specific\npath, so doing it earlier than create_plan() seems unnecessary. And\nhaving it in agg specific code seems better than putting it into global\nroutines.\n\nThere's probably an argument for having a bit more shared code between\ncreate_agg_plan(), create_group_plan() and\ncreate_groupingsets_plan(). But even just adding a new extract_*_cols()\ncall to each of those would probably be ok.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 23 Jun 2020 10:06:13 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: hashagg slowdown due to spill changes"
},
{
"msg_contents": "On Tue, Jun 23, 2020 at 10:06 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2020-06-23 09:23:57 -0700, Melanie Plageman wrote:\n> > On Mon, Jun 22, 2020 at 9:02 PM Andres Freund <andres@anarazel.de>\n> wrote:\n> > > It's not this patch's fault, but none, really none, of this stuff\n> should\n> > > be in the executor.\n> > >\n> > >\n> > Were you thinking it could be done in grouping_planner() and then the\n> > bitmaps could be saved in the PlannedStmt?\n> > Or would you have to wait until query_planner()? Or are you imagining\n> > somewhere else entirely?\n>\n> I haven't thought about it in too much detail, but I would say\n> create_agg_plan() et al. I guess there's some argument to be made to do\n> it in setrefs.c, because we already do convert_combining_aggrefs() there\n> (but I don't like that much).\n>\n> There's no reason to do it before we actually decided on one specific\n> path, so doing it earlier than create_plan() seems unnecessary. And\n> having it in agg specific code seems better than putting it into global\n> routines.\n>\n> There's probably an argument for having a bit more shared code between\n> create_agg_plan(), create_group_plan() and\n> create_groupingsets_plan(). But even just adding a new extract_*_cols()\n> call to each of those would probably be ok.\n>\n>\nSo, my summary of this point in the context of the other discussion\nupthread is:\n\nPlanner should extract the columns that hashagg will need later during\nplanning. Planner should not have HashAgg/MixedAgg nodes request smaller\ntargetlists from their children with CP_SMALL_TLIST to avoid unneeded\nprojection overhead.\nAlso, even this extraction should only be done when the number of groups\nis large enough to suspect a spill.\n\nSo, I wrote a patch that extracts the columns the same way as in\nExecInitAgg but in create_agg_plan() and it doesn't work because we\nhaven't called set_plan_references().\n\nThen, I wrote a patch that does this in set_upper_references(), and it\nseems to work. I've attached that one.\nIt is basically Jeff's patch (based somewhat on my patch) which extracts\nthe columns in ExecInitAgg but I moved the functions over to setrefs.c\nand gave them a different name.\n\nIt's not very elegant.\nI shoved it in at the end of set_upper_references(), but I think there\nshould be a nice way to do it while setting the references for each var\ninstead of walking over the nodes again.\nAlso, I think that the bitmapsets for the colnos should maybe be put\nsomewhere less prominent (than in the main Agg plan node?), since they\nare only used in one small place.\nI tried putting both bitmaps in an array of two bitmaps in the Agg node\n(since there will always be two) to make it look a bit neater, but it\nwas pretty confusing and error prone to remember which one was\naggregated and which one was unaggregated.\n\nNote that I didn't do anything with costing like only extracting the\ncolumns if there are a lot of groups.\n\nAlso, I didn't revert the CP_SMALL_TLIST change in create_agg_plan() or\ncreate_groupingsets_plan().\n\nNot to stir the pot, but I did notice that hashjoin uses CP_SMALL_TLIST\nin create_hashjoin_plan() for the inner side sub-tree and the outer side\none if there are multiple batches. I wondered what was different about\nthat vs hashagg (i.e. why it is okay to do that there).\n\n-- \nMelanie Plageman",
"msg_date": "Wed, 24 Jun 2020 17:26:07 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: hashagg slowdown due to spill changes"
},
{
"msg_contents": "On Wed, Jun 24, 2020 at 05:26:07PM -0700, Melanie Plageman wrote:\n>On Tue, Jun 23, 2020 at 10:06 AM Andres Freund <andres@anarazel.de> wrote:\n>\n>> Hi,\n>>\n>> On 2020-06-23 09:23:57 -0700, Melanie Plageman wrote:\n>> > On Mon, Jun 22, 2020 at 9:02 PM Andres Freund <andres@anarazel.de>\n>> wrote:\n>> > > It's not this patch's fault, but none, really none, of this stuff\n>> should\n>> > > be in the executor.\n>> > >\n>> > >\n>> > Were you thinking it could be done in grouping_planner() and then the\n>> > bitmaps could be saved in the PlannedStmt?\n>> > Or would you have to wait until query_planner()? Or are you imagining\n>> > somewhere else entirely?\n>>\n>> I haven't thought about it in too much detail, but I would say\n>> create_agg_plan() et al. I guess there's some argument to be made to do\n>> it in setrefs.c, because we already do convert_combining_aggrefs() there\n>> (but I don't like that much).\n>>\n>> There's no reason to do it before we actually decided on one specific\n>> path, so doing it earlier than create_plan() seems unnecessary. And\n>> having it in agg specific code seems better than putting it into global\n>> routines.\n>>\n>> There's probably an argument for having a bit more shared code between\n>> create_agg_plan(), create_group_plan() and\n>> create_groupingsets_plan(). But even just adding a new extract_*_cols()\n>> call to each of those would probably be ok.\n>>\n>>\n>So, my summary of this point in the context of the other discussion\n>upthread is:\n>\n>Planner should extract the columns that hashagg will need later during\n>planning. Planner should not have HashAgg/MixedAgg nodes request smaller\n>targetlists from their children with CP_SMALL_TLIST to avoid unneeded\n>projection overhead.\n>Also, even this extraction should only be done when the number of groups\n>is large enough to suspect a spill.\n>\n\nIMO we should extract the columns irrespective of the estimates,\notherwise we won't be able to handle underestimates efficiently.\n\n>\n>Not to stir the pot, but I did notice that hashjoin uses CP_SMALL_TLIST\n>in create_hashjoin_plan() for the inner side sub-tree and the outer side\n>one if there are multiple batches. I wondered what was different about\n>that vs hashagg (i.e. why it is okay to do that there).\n>\n\nYeah. That means that if we have to start batching during execution, we\nmay need to spill much more data. I'd say that's a hashjoin issue that\nwe should fix too (in v14).\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 25 Jun 2020 11:10:24 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: hashagg slowdown due to spill changes"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-12 14:37:15 -0700, Andres Freund wrote:\n> On 2020-06-11 11:14:02 -0700, Jeff Davis wrote:\n> > On Thu, 2020-06-11 at 10:45 -0700, Andres Freund wrote:\n> > > Did you run any performance tests?\n> >\n> > Yes, I reproduced your ~12% regression from V12, and this patch nearly\n> > eliminated it for me.\n> \n> I spent a fair bit of time looking at the difference. Jeff had let me\n> know on chat that he was still seeing some difference, but couldn't\n> quite figure out where that was.\n> \n> Trying it out myself, I observed that the patch helped, but not that\n> much. After a bit I found one major reason for why:\n> LookupTupleHashEntryHash() assigned the hash to pointer provided by the\n> caller's before doing the insertion. That ended up causing a pipeline\n> stall (I assume it's store forwarding, but not sure). Moving the\n> assignment to the caller variable to after the insertion got rid of\n> that.\n\nThis is still not resolved. We're right now slower than 12. It's\neffectively not possible to do performance comparisons right now. This\nwas nearly two months ago.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 24 Jul 2020 16:51:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: hashagg slowdown due to spill changes"
},
{
"msg_contents": "On Fri, Jul 24, 2020 at 4:51 PM Andres Freund <andres@anarazel.de> wrote:\n> This is still not resolved. We're right now slower than 12. It's\n> effectively not possible to do performance comparisons right now. This\n> was nearly two months ago.\n\nI have added a new open item for this separate\nLookupTupleHashEntryHash()/lookup_hash_entry() pipeline-stall issue.\n\n(For the record I mistakenly believed that commit 23023022 resolved\nall of the concerns raised on this thread, which is why I closed out\nthe open item associated with this thread. Evidently work remains to\nfix a remaining regression that affects simple in-memory hash\naggregation, though.)\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 25 Jul 2020 12:41:42 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: hashagg slowdown due to spill changes"
},
{
"msg_contents": "On Sat, Jul 25, 2020 at 12:41 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I have added a new open item for this separate\n> LookupTupleHashEntryHash()/lookup_hash_entry() pipeline-stall issue.\n\nAttached is a rebased version of Andres' now-bitrot 2020-06-12 patch\n(\"aggspeed.diff\").\n\nI find that Andres original \"SELECT cat, count(*) FROM\nfewgroups_many_rows GROUP BY 1;\" test case is noticeably improved by\nthe patch. Without the patch, v13 takes ~11.46 seconds. With the\npatch, it takes only ~10.64 seconds.\n\nDidn't test it against v12 yet, but I have no reason to doubt Andres'\nexplanation. I gather that if we can get this patch committed, we can\nclose the relevant LookupTupleHashEntryHash() open item.\n\nCan you take this off my hands, Jeff?\n\nThanks\n-- \nPeter Geoghegan",
"msg_date": "Sat, 25 Jul 2020 15:08:28 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: hashagg slowdown due to spill changes"
},
{
"msg_contents": "On Sat, 2020-07-25 at 15:08 -0700, Peter Geoghegan wrote:\n> I find that Andres original \"SELECT cat, count(*) FROM\n> fewgroups_many_rows GROUP BY 1;\" test case is noticeably improved by\n> the patch. Without the patch, v13 takes ~11.46 seconds. With the\n> patch, it takes only ~10.64 seconds.\n\nI saw less of an improvement than you or Andres (perhaps just more\nnoise). But given that both you and Andres are reporting a measurable\nimprovement, then I went ahead and committed it. Thank you.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Sun, 26 Jul 2020 16:17:38 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: hashagg slowdown due to spill changes"
},
{
"msg_contents": "On Sun, Jul 26, 2020 at 4:17 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> I saw less of an improvement than you or Andres (perhaps just more\n> noise). But given that both you and Andres are reporting a measurable\n> improvement, then I went ahead and committed it. Thank you.\n\nThanks!\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 27 Jul 2020 08:30:36 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: hashagg slowdown due to spill changes"
}
] |
[
{
"msg_contents": "Hi,\n\nWe're carrying a bunch of obsolete and in one case insecure advice on\nkernel settings. Here's an attempt to clean some of that up.\n\nLinux:\n * Drop reference to ancient systems that didn't have a sysctl command.\n * Drop references to Linux before 2.6.\n * I was tempted to remove the reference to oom_adj, which was\napparently deprecated from 2.6.36, but that's probably recent enough\nto keep (RHEL6 may outlive humanity).\n\nmacOS:\n * Drop reference to 10.2 and 10.3 systems. That's 15-16 years ago.\nEven the ancient PPC systems in the build farm run 10.4+.\n\nFreeBSD:\n * Drop insecure and outdated jail instructions. I moved the\npre-FreeBSD 11 behaviour into a brief note in parentheses, because\nFreeBSD 11 is the oldest release of that OS that is still in support.\nIn that parenthetical note, I dropped the reference to port numbers\nand UIDs in shmem keys since we now use pgdata inode numbers instead.\n * Drop SysV semaphore instruction. We switched to POSIX on this\nplatform in PostgreSQL 10, and we don't bother to give the redundant\ninstructions about semaphores for Linux so we might as well drop this\nnoise for FreeBSD too.\n * Clarify that kern.ipc.shm_use_phys only has a useful effect if\nshared_memory_type=sysv, which is not the default.\n * Drop some stuff about pre-4.0 systems. That was 20 years ago.\n\nNetBSD:\n * Drop reference to pre-5.0 systems. That was 11 years ago. Maybe\nsomeone wants to argue with me on this one?\n\nOpenBSD:\n * Drop instruction on recompiling the kernel on pre-3.3 systems.\nThat was 17 years ago.\n\nSolaris/illumos:\n * Drop instructions on Solaris 6-9 systems. 10 came out 15 years\nago, 9 was fully desupported 6 years ago. The last person to mention\nSolaris 9 on the mailing list was ... me. That machine had cobwebs\neven then.\n * Drop reference to OpenSolaris, which was cancelled ten years ago;\nthe surviving project goes by illumos, so use that name.\n\nAIX:\n * Drop reference to 5.1, since there is no way older systems than\nthat are going to be running new PostgreSQL releases. 5.1 itself was\ndesupported by IBM 14 years ago.\n\nHP-UX:\n * Drop advice for v10. 11.x came out 23 years ago.\n\nIt's a bit inconsistent that we bother to explain the SysV shmem\nsysctls on some systems but not others, just because once upon a time\nit was necessary to tweak them on some systems and not others due to\ndefaults. You shouldn't need that anywhere now IIUC, unless you run a\nlot of clusters or use shared_memory_type=sysv. I'm not proposing to\nadd it where it's missing, as I don't have the information and I doubt\nit's really useful anyway; you can find that stuff elsewhere if you\nreally need it.",
"msg_date": "Sat, 6 Jun 2020 16:57:20 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Vacuuming the operating system documentation"
},
{
"msg_contents": "On 2020-06-06 06:57, Thomas Munro wrote:\n> We're carrying a bunch of obsolete and in one case insecure advice on\n> kernel settings. Here's an attempt to clean some of that up.\n\nThese changes seem sensible to me.\n\n> HP-UX:\n> * Drop advice for v10. 11.x came out 23 years ago.\n\nWe still have a version 10 in the build farm. :)\n\n> It's a bit inconsistent that we bother to explain the SysV shmem\n> sysctls on some systems but not others, just because once upon a time\n> it was necessary to tweak them on some systems and not others due to\n> defaults. You shouldn't need that anywhere now IIUC, unless you run a\n> lot of clusters or use shared_memory_type=sysv. I'm not proposing to\n> add it where it's missing, as I don't have the information and I doubt\n> it's really useful anyway; you can find that stuff elsewhere if you\n> really need it.\n\nWhen this was a serious hurdle in the olden days, we added as much \ninformation as possible. I agree we can trim it now or let it age out.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 6 Jun 2020 09:58:43 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Vacuuming the operating system documentation"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-06-06 06:57, Thomas Munro wrote:\n>> We're carrying a bunch of obsolete and in one case insecure advice on\n>> kernel settings. Here's an attempt to clean some of that up.\n\n> These changes seem sensible to me.\n\n+1\n\n>> HP-UX:\n>> * Drop advice for v10. 11.x came out 23 years ago.\n\n> We still have a version 10 in the build farm. :)\n\nYeah, but I don't need advice on installing PG on that ;-). In general,\nI think the filter rule could be: is it likely that someone would try to\ninstall PG 13-or-later from scratch (with no pre-existing installation)\non this OS version? If there is a pre-existing install, they'll already\nhave dealt with any kernel configuration issues.\n\nSo I concur with dropping all this stuff, and while we're at it I'd vote\nfor getting rid of the oom_adj para. RHEL6 will be fully EOL around the\ntime PG13 comes out, so I don't believe anyone's making brand new installs\nthere either.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Jun 2020 10:41:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Vacuuming the operating system documentation"
},
{
"msg_contents": "On Sat, Jun 6, 2020 at 4:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > On 2020-06-06 06:57, Thomas Munro wrote:\n> >> We're carrying a bunch of obsolete and in one case insecure advice on\n> >> kernel settings. Here's an attempt to clean some of that up.\n>\n> > These changes seem sensible to me.\n>\n> +1\n>\n\n+1 as well.\n\n\n>> HP-UX:\n> >> * Drop advice for v10. 11.x came out 23 years ago.\n\n> > We still have a version 10 in the build farm. :)\n>\n> Yeah, but I don't need advice on installing PG on that ;-). In general,\n> I think the filter rule could be: is it likely that someone would try to\n> install PG 13-or-later from scratch (with no pre-existing installation)\n> on this OS version? If there is a pre-existing install, they'll already\n> have dealt with any kernel configuration issues.\n>\n> So I concur with dropping all this stuff, and while we're at it I'd vote\n> for getting rid of the oom_adj para. RHEL6 will be fully EOL around the\n> time PG13 comes out, so I don't believe anyone's making brand new installs\n> there either.\n>\n>\nLet's hope PG13 isn't that late -- the end of Extended Lifecycle Support is\nJune 30, 2024 for RHEL 6. (It *enters* ELS around the time of pg 13).\n\nAnd yes, given that, you'd be surprised how many people make brand new\ninstalls on that. That said, they *shouldn't*, so I'm fine with dropping\nthe instructions for those as well. With luck it might encourage some\npeople to realize it's a bad idea...\n\n//Magnus\n\nOn Sat, Jun 6, 2020 at 4:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-06-06 06:57, Thomas Munro wrote:\n>> We're carrying a bunch of obsolete and in one case insecure advice on\n>> kernel settings. Here's an attempt to clean some of that up.\n\n> These changes seem sensible to me.\n\n+1+1 as well.\n>> HP-UX:\n>> * Drop advice for v10. 11.x came out 23 years ago.\n\n> We still have a version 10 in the build farm. :)\n\nYeah, but I don't need advice on installing PG on that ;-). In general,\nI think the filter rule could be: is it likely that someone would try to\ninstall PG 13-or-later from scratch (with no pre-existing installation)\non this OS version? If there is a pre-existing install, they'll already\nhave dealt with any kernel configuration issues.\n\nSo I concur with dropping all this stuff, and while we're at it I'd vote\nfor getting rid of the oom_adj para. RHEL6 will be fully EOL around the\ntime PG13 comes out, so I don't believe anyone's making brand new installs\nthere either.Let's hope PG13 isn't that late -- the end of Extended Lifecycle Support is June 30, 2024 for RHEL 6. (It *enters* ELS around the time of pg 13).And yes, given that, you'd be surprised how many people make brand new installs on that. That said, they *shouldn't*, so I'm fine with dropping the instructions for those as well. With luck it might encourage some people to realize it's a bad idea...//Magnus",
"msg_date": "Sat, 6 Jun 2020 16:57:24 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Vacuuming the operating system documentation"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> On Sat, Jun 6, 2020 at 4:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> So I concur with dropping all this stuff, and while we're at it I'd vote\n>> for getting rid of the oom_adj para. RHEL6 will be fully EOL around the\n>> time PG13 comes out, so I don't believe anyone's making brand new installs\n>> there either.\n\n> Let's hope PG13 isn't that late -- the end of Extended Lifecycle Support is\n> June 30, 2024 for RHEL 6. (It *enters* ELS around the time of pg 13).\n\nELS basically means that they aren't going to take down the existing\nwebsite information about RHEL6 just yet. I quote from the EOL notice\nI got last December:\n\n This is the one year retirement notice for Red Hat Enterprise Linux 6\n Maintenance Support 2 (Product Retirement) Phase. This notification\n applies only to those customers subscribed to minor releases for Red\n Hat Enterprise Linux 6.\n\n In accordance with the Red Hat Enterprise Linux Errata Support Policy,\n Red Hat Enterprise Linux 6 will be retired as of November 30, 2020 and\n enter Extended Life Phase which means users will receive the below\n support.\n\n ? Limited technical support for existing Red Hat Enterprise Linux 6\n deployments.\n ? Previously released bug fixes (RHBAs), security errata (RHSAs), and\n product enhancements (RHEAs).\n ? Red Hat Knowledgebase and other content (white papers, reference \n architectures, etc.) found in the Red Hat Customer Portal.\n ? Red Hat Enterprise Linux 6 documentation.\n\nThere won't be any new bug or security fixes after December; the above is\nonly saying that existing updates will still be available to download.\n(I'm not sure what \"limited technical support\" really means, but I bet\nit involves forking over additional per-incident money.)\n\n From our own perspective, we no longer have the ability to support PG\non RHEL6 anyway. 
I see no RHEL6 machines in the buildfarm, and my own\ninstallation is on a disk that's not even connected to anything anymore.\nSo we might as well stop giving the impression that it's supported.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Jun 2020 11:14:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Vacuuming the operating system documentation"
},
{
"msg_contents": "On 2020-06-06 17:14, Tom Lane wrote:\n>> Let's hope PG13 isn't that late -- the end of Extended Lifecycle Support is\n>> June 30, 2024 for RHEL 6. (It*enters* ELS around the time of pg 13).\n> ELS basically means that they aren't going to take down the existing\n> website information about RHEL6 just yet.\n\nHmm, we removed support for RHEL 5 in PG 13, partially based on the \ninformation that ELS for RHEL 5 ends in November 2020. It appears we \nhave misinterpreted that and we can trim the trailing edge more \naggressively.\n\nAnyway, this is only a documentation patch. Surely no one will doing \ntheir very first install of Postgres on an unconfigured RHEL 6 this year.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 6 Jun 2020 18:35:29 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Vacuuming the operating system documentation"
},
{
"msg_contents": "On Sat, Jun 6, 2020 at 6:35 PM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2020-06-06 17:14, Tom Lane wrote:\n> >> Let's hope PG13 isn't that late -- the end of Extended Lifecycle\n> Support is\n> >> June 30, 2024 for RHEL 6. (It*enters* ELS around the time of pg 13).\n> > ELS basically means that they aren't going to take down the existing\n> > website information about RHEL6 just yet.\n>\n> Hmm, we removed support for RHEL 5 in PG 13, partially based on the\n> information that ELS for RHEL 5 ends in November 2020. It appears we\n> have misinterpreted that and we can trim the trailing edge more\n> aggressively.\n>\n> Anyway, this is only a documentation patch. Surely no one will doing\n> their very first install of Postgres on an unconfigured RHEL 6 this year.\n>\n\nOh they absolutely will. But most likely they will also use an older\nversion of PostgreSQL because that's what their enterprise product\nsupports. And we're not talking about removing the documentation from the\nold version (I'm assuming).\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Sat, Jun 6, 2020 at 6:35 PM Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:On 2020-06-06 17:14, Tom Lane wrote:\n>> Let's hope PG13 isn't that late -- the end of Extended Lifecycle Support is\n>> June 30, 2024 for RHEL 6. (It*enters* ELS around the time of pg 13).\n> ELS basically means that they aren't going to take down the existing\n> website information about RHEL6 just yet.\n\nHmm, we removed support for RHEL 5 in PG 13, partially based on the \ninformation that ELS for RHEL 5 ends in November 2020. It appears we \nhave misinterpreted that and we can trim the trailing edge more \naggressively.\n\nAnyway, this is only a documentation patch. 
Surely no one will doing \ntheir very first install of Postgres on an unconfigured RHEL 6 this year.Oh they absolutely will. But most likely they will also use an older version of PostgreSQL because that's what their enterprise product supports. And we're not talking about removing the documentation from the old version (I'm assuming).-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Sat, 6 Jun 2020 18:38:52 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Vacuuming the operating system documentation"
},
{
"msg_contents": "On Sun, Jun 7, 2020 at 4:39 AM Magnus Hagander <magnus@hagander.net> wrote:>\n> Oh they absolutely will. But most likely they will also use an older version of PostgreSQL because that's what their enterprise product supports. And we're not talking about removing the documentation from the old version (I'm assuming).\n\nYeah, I wasn't planning on changing anything in backbranches. It\nsounds like we're OK with doing this for 13. Here's a version with a\nfew more changes:\n\n * Drop mention of Linux oom_adj, per discussion.\n * Add paragraphs to each OS to point out what we actually expect you\nto need to change (ie mostly nothing).\n * Drop mention of PG 9.2's requirements for more SysV shmem. It made\nsense to have that in there while versions with both behaviours were\nstill in circulation and you could have been looking at the wrong\nversion's manual, but that's stuff you can find in old release notes\nif you're a historian.\n * Drop the paragraph that tells you what Linux's default SHMMAX is:\nthat has been wrong since 3.16. The default is now sky high, a bit\nunder ULONG_MAX.\n * Drop the alternative way to set SHMMAX etc via /proc on Linux.\nThere's hardly any reason to do it at all, so describing two ways is\njust wasting pixels.\n * Drop some more comments about ancient macOS.\n * Adjust the text that discusses adjusting shared_buffers if you\ncan't acquire enough SysV shmem, because that only makes sense if\nshared_memory_type=sysv.\n * Point out that NetBSD's kern.ipc.shm_use_phys only applies to SysV\nmemory, as done for FreeBSD in the previous version. I hadn't noticed\nthat NetBSD has that too, and I peeked at the source to check that\nthey only use that for SysV memory too.\n * Drop the text about recognising and reconfiguring kernels that were\nbuilt without SysV support; that's advice from another age. Regular\nusers don't configure and build kernels, and those that do that don't\nneed these hints. 
I am aware of one modern kernel that ships\npre-built without SysV IPC: Android, but apparently this stuff is also\nmissing from its libc so you won't get this far.",
"msg_date": "Sun, 7 Jun 2020 14:52:01 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Vacuuming the operating system documentation"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Yeah, I wasn't planning on changing anything in backbranches. It\n> sounds like we're OK with doing this for 13. Here's a version with a\n> few more changes:\n\nLooks pretty good to me. I attach a delta patch with a few more\nproposed adjustments. Notably, I made the wording about /etc/sysctl.conf\nfor Linux match that for other OSes, except I said \"see\" not\n\"modify\" because (at least on my Red Hat based installations)\nthe comments in /etc/sysctl.conf direct you to modify various\nsub-files.\n\n> ... I am aware of one modern kernel that ships\n> pre-built without SysV IPC: Android, but apparently this stuff is also\n> missing from its libc so you won't get this far.\n\nYeah, ISTR some prior discussion about that on our lists.\nIf anyone's trying to run PG on their phone, they probably\ndo not need help from these docs.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 06 Jun 2020 23:42:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Vacuuming the operating system documentation"
},
{
"msg_contents": "On Sun, Jun 7, 2020 at 3:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Yeah, I wasn't planning on changing anything in backbranches. It\n> > sounds like we're OK with doing this for 13. Here's a version with a\n> > few more changes:\n>\n> Looks pretty good to me. I attach a delta patch with a few more\n> proposed adjustments. Notably, I made the wording about /etc/sysctl.conf\n> for Linux match that for other OSes, except I said \"see\" not\n> \"modify\" because (at least on my Red Hat based installations)\n> the comments in /etc/sysctl.conf direct you to modify various\n> sub-files.\n\nThanks. Pushed.\n\nOne more thing I spotted, post commit: the example symptom of\nsystemd's RemoveIPC feature trashing your cluster is an error from\nsemctl(), but that can't happen anymore on a standard build. Not sure\nwhat to put in its place... I guess the remaining symptoms would be\n(1) the little \"interlock\" shmem segment is unregistered, which is\nprobably symptom-free (until you start a second postmaster in the same\npgdata), and (2) POSIX shm objects getting unlinked underneath a\nparallel query. That's probably what this build farm animal was\ntelling me:\n\nhttps://www.postgresql.org/message-id/CA+hUKG+t40GoUczAhQsRhxWeS=fsZXpObyojboUTN6BEOfUj4Q@mail.gmail.com\n\n\n",
"msg_date": "Sun, 7 Jun 2020 23:03:57 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Vacuuming the operating system documentation"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> One more thing I spotted, post commit: the example symptom of\n> systemd's RemoveIPC feature trashing your cluster is an error from\n> semctl(), but that can't happen anymore on a standard build.\n\nGood point.\n\n> Not sure\n> what to put in its place... I guess the remaining symptoms would be\n> (1) the little \"interlock\" shmem segment is unregistered, which is\n> probably symptom-free (until you start a second postmaster in the same\n> pgdata), and (2) POSIX shm objects getting unlinked underneath a\n> parallel query.\n\n(1) would be very scary, because the \"symptom\" would be \"second postmaster\nsuccessfully starts and trashes your database\". But our previous\ndiscussion found that that won't happen, because systemd notices the\nsegment's positive nattch count. Unfortunately it seems there's nothing\nequivalent for POSIX shmem, so (2) is possible. See\n\nhttps://www.postgresql.org/message-id/5915.1481218827%40sss.pgh.pa.us\n\nRelevant to the current discussion: this creates a possible positive\nreason for setting dynamic_shared_memory_type to \"sysv\", namely if it's\nthe best available way to get around RemoveIPC in a particular situation.\nShould we document that?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 07 Jun 2020 11:00:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Vacuuming the operating system documentation"
},
{
"msg_contents": "On Mon, Jun 8, 2020 at 3:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Not sure\n> > what to put in its place... I guess the remaining symptoms would be\n> > (1) the little \"interlock\" shmem segment is unregistered, which is\n> > probably symptom-free (until you start a second postmaster in the same\n> > pgdata), and (2) POSIX shm objects getting unlinked underneath a\n> > parallel query.\n>\n> (1) would be very scary, because the \"symptom\" would be \"second postmaster\n> successfully starts and trashes your database\". But our previous\n> discussion found that that won't happen, because systemd notices the\n> segment's positive nattch count. Unfortunately it seems there's nothing\n> equivalent for POSIX shmem, so (2) is possible. See\n\nAh, I see. Ok, I propose we update the example symptom to (2), and\nback-patch to 10. See attached.\n\n> https://www.postgresql.org/message-id/5915.1481218827%40sss.pgh.pa.us\n>\n> Relevant to the current discussion: this creates a possible positive\n> reason for setting dynamic_shared_memory_type to \"sysv\", namely if it's\n> the best available way to get around RemoveIPC in a particular situation.\n> Should we document that?\n\nDoesn't seem worth the trouble, especially since the real solution is\nto tell systemd to back off by one of the two methods described.\nAlso, I guess there's a moment between shmget() and shmat() when a\nnewborn SysV DSM segment has nattch == 0.",
"msg_date": "Mon, 8 Jun 2020 11:54:58 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Vacuuming the operating system documentation"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Mon, Jun 8, 2020 at 3:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> ... Unfortunately it seems there's nothing\n>> equivalent for POSIX shmem, so (2) is possible. See\n\n> Ah, I see. Ok, I propose we update the example symptom to (2), and\n> back-patch to 10. See attached.\n\n+1, except s/attemping/attempting/\n\n>> Relevant to the current discussion: this creates a possible positive\n>> reason for setting dynamic_shared_memory_type to \"sysv\", namely if it's\n>> the best available way to get around RemoveIPC in a particular situation.\n>> Should we document that?\n\n> Doesn't seem worth the trouble, especially since the real solution is\n> to tell systemd to back off by one of the two methods described.\n\nAgreed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 07 Jun 2020 20:03:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Vacuuming the operating system documentation"
},
{
"msg_contents": "On 2020-06-07 17:00, Tom Lane wrote:\n> Relevant to the current discussion: this creates a possible positive\n> reason for setting dynamic_shared_memory_type to \"sysv\", namely if it's\n> the best available way to get around RemoveIPC in a particular situation.\n> Should we document that?\n\nIt sounds like both shared_memory_type and dynamic_shared_memory_type \nought to default to \"sysv\" on Linux.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 8 Jun 2020 07:44:06 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Vacuuming the operating system documentation"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-06-07 17:00, Tom Lane wrote:\n>> Relevant to the current discussion: this creates a possible positive\n>> reason for setting dynamic_shared_memory_type to \"sysv\", namely if it's\n>> the best available way to get around RemoveIPC in a particular situation.\n>> Should we document that?\n\n> It sounds like both shared_memory_type and dynamic_shared_memory_type \n> ought to default to \"sysv\" on Linux.\n\nPer the discussion in the older thread, that would only fix things if we\nheld at least one attach count constantly on every shared segment. IIUC,\nthat's not guaranteed for DSAs. So changing dynamic_shared_memory_type\nwould reduce the risk but not really fix anything.\n\nFor the primary shm segment, we don't (without EXEC_BACKEND) really\ncare if somebody unlinks the file prematurely, since backends inherit\nthe mapping via fork. Hence, no need to change shared_memory_type.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Jun 2020 09:51:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Vacuuming the operating system documentation"
}
] |
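The thread above turns on an operational detail worth making concrete: systemd-logind's default `RemoveIPC=yes` can unlink a non-system postgres user's POSIX shared memory objects (the symptom being parallel queries failing on missing `/dev/shm` segments), and the fix discussed is setting `RemoveIPC=no` and restarting systemd-logind. A minimal pre-flight check is sketched below under stated assumptions: the `check_removeipc` helper is hypothetical (not part of PostgreSQL or systemd), and `/etc/systemd/logind.conf` is the stock path, which may differ per distribution.

```shell
# Hypothetical helper (not part of PostgreSQL): report whether a
# systemd-logind config file disables RemoveIPC, which would otherwise
# let logind reap a non-system user's IPC objects at session end.
check_removeipc() {
    # Match an uncommented "RemoveIPC=no", tolerating surrounding whitespace.
    if grep -Eq '^[[:space:]]*RemoveIPC[[:space:]]*=[[:space:]]*no' "$1" 2>/dev/null; then
        echo "RemoveIPC disabled: ok"
    else
        echo "RemoveIPC not disabled: set RemoveIPC=no and restart systemd-logind"
    fi
}

# Typical use on a Linux host (path assumed; adjust per distribution):
# check_removeipc /etc/systemd/logind.conf
```

Note this only inspects the main config file; a real check would also need to scan any `logind.conf.d` drop-in directories, which systemd reads as overrides.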
[
{
"msg_contents": "I'm somewhat confused by the selection and order of the output columns\nproduced by the new psql commands \\dAo and \\dAp (access method operators\nand functions, respectively). Currently, you get\n\n\\dAo\n\n AM | Operator family | Operator\n-----+-----------------+----------------------\n gin | jsonb_path_ops | @> (jsonb, jsonb)\n...\n\n\\dAo+\n\n\\dAo+\n List of operators of operator families\n AM | Operator family | Operator | Strategy | Purpose | Sort opfamily\n-------+-----------------+-----------------------------------------+----------+---------+---------------\n btree | float_ops | < (double precision, double precision) | 1 | search |\n...\n\n\\dAp\n List of support functions of operator families\n AM | Operator family | Left arg type | Right arg type | Number | Function\n-------+-----------------+------------------+------------------+--------+---------------------\n btree | float_ops | double precision | double precision | 1 | btfloat8cmp\n...\n\nFirst, why isn't the strategy number included in the \\dAo? It's part\nof the primary key of pg_amop, and it's essential for interpreting the\nmeaning of the output.\n\nThen there are gratuitous differences in the presentation of \\dAo and \\dAp.\nWhy does \\dAo show the operator with signature and omit the left arg/right arg\ncolumns, but \\dAp shows it the other way around?\n\nI'm also wondering whether this is fully correct. Would it be possible for the\nargument types of the operator/function to differ from the left arg/right arg\ntypes? (Perhaps binary compatible?)\n\nEither way some more consistency would be welcome.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 6 Jun 2020 19:15:40 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "output columns of \\dAo and \\dAp"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> I'm also wondering whether this is fully correct. Would it be possible for the\n> argument types of the operator/function to differ from the left arg/right arg\n> types? (Perhaps binary compatible?)\n\npg_amop.h specifies that\n\n * The primary key for this table is <amopfamily, amoplefttype, amoprighttype,\n * amopstrategy>. amoplefttype and amoprighttype are just copies of the\n * operator's oprleft/oprright, ie its declared input data types.\n\nPerhaps it'd be a good idea for opr_sanity.sql to verify that, since\nit'd be an easy thing to mess up in handmade pg_amop entries. But\nat least for the foreseeable future, there's no reason for \\dAo to show\namoplefttype/amoprighttype separately.\n\nI agree that \\dAo ought to be showing amopstrategy; moreover that ought\nto be much higher in the sort key than it is. Also, if we're not going\nto show amoppurpose, then the view probably ought to hide non-search\noperators altogether. It is REALLY misleading to not distinguish search\nand ordering operators.\n \nThe situation is different for pg_amproc: if you look for discrepancies\nyou will find plenty, since in many cases a support function's signature\nhas little to do with what types it is registered under. Perhaps it'd be\nworthwhile for \\dAp to show the functions as regprocedure in addition to\nshowing amproclefttype/amprocrighttype explicitly. In any case, I think\nit's rather misleading for \\dAp to label amproclefttype/amprocrighttype as\n\"Left arg type\" and \"Right arg type\", because for almost everything except\nbtree/hash, that's not what the support function's arguments actually are.\nPerhaps names along the lines of \"Registered left type\" and \"Registered\nright type\" would put readers in a better frame of mind to understand\nthe entries.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Jun 2020 17:34:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: output columns of \\dAo and \\dAp"
},
{
"msg_contents": "Sergey, Nikita, Alexander, if you can please see this thread and propose\na solution, that'd be very welcome.\n\n\nOn 2020-Jun-06, Tom Lane wrote:\n\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > I'm also wondering whether this is fully correct. Would it be possible for the\n> > argument types of the operator/function to differ from the left arg/right arg\n> > types? (Perhaps binary compatible?)\n> \n> pg_amop.h specifies that\n> \n> * The primary key for this table is <amopfamily, amoplefttype, amoprighttype,\n> * amopstrategy>. amoplefttype and amoprighttype are just copies of the\n> * operator's oprleft/oprright, ie its declared input data types.\n> \n> Perhaps it'd be a good idea for opr_sanity.sql to verify that, since\n> it'd be an easy thing to mess up in handmade pg_amop entries. But\n> at least for the foreseeable future, there's no reason for \\dAo to show\n> amoplefttype/amoprighttype separately.\n> \n> I agree that \\dAo ought to be showing amopstrategy; moreover that ought\n> to be much higher in the sort key than it is. Also, if we're not going\n> to show amoppurpose, then the view probably ought to hide non-search\n> operators altogether. It is REALLY misleading to not distinguish search\n> and ordering operators.\n> \n> The situation is different for pg_amproc: if you look for discrepancies\n> you will find plenty, since in many cases a support function's signature\n> has little to do with what types it is registered under. Perhaps it'd be\n> worthwhile for \\dAp to show the functions as regprocedure in addition to\n> showing amproclefttype/amprocrighttype explicitly. 
In any case, I think\n> it's rather misleading for \\dAp to label amproclefttype/amprocrighttype as\n> \"Left arg type\" and \"Right arg type\", because for almost everything except\n> btree/hash, that's not what the support function's arguments actually are.\n> Perhaps names along the lines of \"Registered left type\" and \"Registered\n> right type\" would put readers in a better frame of mind to understand\n> the entries.\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 7 Jul 2020 17:02:02 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: output columns of \\dAo and \\dAp"
},
{
"msg_contents": "On Sun, Jun 7, 2020 at 12:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > I'm also wondering whether this is fully correct. Would it be possible for the\n> > argument types of the operator/function to differ from the left arg/right arg\n> > types? (Perhaps binary compatible?)\n>\n> pg_amop.h specifies that\n>\n> * The primary key for this table is <amopfamily, amoplefttype, amoprighttype,\n> * amopstrategy>. amoplefttype and amoprighttype are just copies of the\n> * operator's oprleft/oprright, ie its declared input data types.\n>\n> Perhaps it'd be a good idea for opr_sanity.sql to verify that, since\n> it'd be an easy thing to mess up in handmade pg_amop entries. But\n> at least for the foreseeable future, there's no reason for \\dAo to show\n> amoplefttype/amoprighttype separately.\n\n+1 for checking consistency of amoplefttype/amoprighttype in opr_sanity.sql\n\n> I agree that \\dAo ought to be showing amopstrategy;\n\nI agree that the strategy and purpose of an operator is valuable\ninformation. And we probably shouldn't hide it in \\dAo. If we do so,\nthen \\dAo and \\dAo+ differ by only \"sort opfamily\" column. Is it\nworth keeping the \\dAo+ command for single-column difference?\n\n> moreover that ought\n> to be much higher in the sort key than it is.\n\nDo you mean we should sort by strategy number and only then by\narg types? Current output shows operators grouped by opclasses,\nafter that cross-opclass operators are shown. This order seems to me\nmore worthwhile than seeing all the variations of the same strategy\ntogether.\n\n> Also, if we're not going\n> to show amoppurpose, then the view probably ought to hide non-search\n> operators altogether. 
It is REALLY misleading to not distinguish search\n> and ordering operators.\n\n+1\n\n> The situation is different for pg_amproc: if you look for discrepancies\n> you will find plenty, since in many cases a support function's signature\n> has little to do with what types it is registered under. Perhaps it'd be\n> worthwhile for \\dAp to show the functions as regprocedure in addition to\n> showing amproclefttype/amprocrighttype explicitly. In any case, I think\n> it's rather misleading for \\dAp to label amproclefttype/amprocrighttype as\n> \"Left arg type\" and \"Right arg type\", because for almost everything except\n> btree/hash, that's not what the support function's arguments actually are.\n> Perhaps names along the lines of \"Registered left type\" and \"Registered\n> right type\" would put readers in a better frame of mind to understand\n> the entries.\n\n+1 for rename \"Left arg type\"/\"Right arg type\" to \"Registered left\ntype\"/\"Registered right type\".\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Wed, 8 Jul 2020 01:09:50 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: output columns of \\dAo and \\dAp"
},
{
"msg_contents": "On 7/7/20 6:09 PM, Alexander Korotkov wrote:\n> On Sun, Jun 7, 2020 at 12:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>>> I'm also wondering whether this is fully correct. Would it be possible for the\n>>> argument types of the operator/function to differ from the left arg/right arg\n>>> types? (Perhaps binary compatible?)\n>>\n>> pg_amop.h specifies that\n>>\n>> * The primary key for this table is <amopfamily, amoplefttype, amoprighttype,\n>> * amopstrategy>. amoplefttype and amoprighttype are just copies of the\n>> * operator's oprleft/oprright, ie its declared input data types.\n>>\n>> Perhaps it'd be a good idea for opr_sanity.sql to verify that, since\n>> it'd be an easy thing to mess up in handmade pg_amop entries. But\n>> at least for the foreseeable future, there's no reason for \\dAo to show\n>> amoplefttype/amoprighttype separately.\n> \n> +1 for checking consistency of amoplefttype/amoprighttype in opr_sanity.sql\n> \n>> I agree that \\dAo ought to be showing amopstrategy;\n> \n> I agree that the strategy and purpose of an operator is valuable\n> information. And we probably shouldn't hide it in \\dAo. If we do so,\n> then \\dAo and \\dAo+ differ by only \"sort opfamily\" column. Is it\n> worth keeping the \\dAo+ command for single-column difference?\n> \n>> moreover that ought\n>> to be much higher in the sort key than it is.\n> \n> Do you mean we should sort by strategy number and only then by\n> arg types? Current output shows operators grouped by opclasses,\n> after that cross-opclass operators are shown. This order seems to me\n> more worthwhile than seeing all the variations of the same strategy\n> together.\n> \n>> Also, if we're not going\n>> to show amoppurpose, then the view probably ought to hide non-search\n>> operators altogether. 
It is REALLY misleading to not distinguish search\n>> and ordering operators.\n> \n> +1\n> \n>> The situation is different for pg_amproc: if you look for discrepancies\n>> you will find plenty, since in many cases a support function's signature\n>> has little to do with what types it is registered under. Perhaps it'd be\n>> worthwhile for \\dAp to show the functions as regprocedure in addition to\n>> showing amproclefttype/amprocrighttype explicitly. In any case, I think\n>> it's rather misleading for \\dAp to label amproclefttype/amprocrighttype as\n>> \"Left arg type\" and \"Right arg type\", because for almost everything except\n>> btree/hash, that's not what the support function's arguments actually are.\n>> Perhaps names along the lines of \"Registered left type\" and \"Registered\n>> right type\" would put readers in a better frame of mind to understand\n>> the entries.\n> \n> +1 for rename \"Left arg type\"/\"Right arg type\" to \"Registered left\n> type\"/\"Registered right type\".\n\nFrom the RMT perspective, if there is an agreed upon approach (which it\nsounds like from the above) can someone please commit to working on\nresolving this open item?\n\nThanks!\n\nJonathan",
"msg_date": "Thu, 9 Jul 2020 15:03:11 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: output columns of \\dAo and \\dAp"
},
{
"msg_contents": "On Thu, Jul 9, 2020 at 10:03 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> From the RMT perspective, if there is an agreed upon approach (which it\n> sounds like from the above) can someone please commit to working on\n> resolving this open item?\n\nI hardly can extract an approach from this thread, because for me the\nwhole issue is about details :)\n\nBut I think we can come to an agreement shortly. And yes, I commit to\nresolve this.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Fri, 10 Jul 2020 02:24:19 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: output columns of \\dAo and \\dAp"
},
{
"msg_contents": "On Fri, Jul 10, 2020 at 2:24 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Thu, Jul 9, 2020 at 10:03 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> > From the RMT perspective, if there is an agreed upon approach (which it\n> > sounds like from the above) can someone please commit to working on\n> > resolving this open item?\n>\n> I hardly can extract an approach from this thread, because for me the\n> whole issue is about details :)\n>\n> But I think we can come to an agreement shortly. And yes, I commit to\n> resolve this.\n\nThe proposed patch is attached. This patch is fixes two points:\n * Adds strategy number and purpose to output of \\dAo\n * Renames \"Left/right arg type\" columns of \\dAp to \"Registered left/right type\"\n\nI'm not yet convinced we should change the sort key for \\dAo.\n\nAny thoughts?\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Sat, 11 Jul 2020 14:23:33 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: output columns of \\dAo and \\dAp"
},
{
"msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> The proposed patch is attached. This patch is fixes two points:\n> * Adds strategy number and purpose to output of \\dAo\n> * Renames \"Left/right arg type\" columns of \\dAp to \"Registered left/right type\"\n\nI think that \\dAp should additionally be changed to print the\nfunction via \"oid::regprocedure\", not just proname. A possible\ncompromise, if you think that's too wordy, is to do it that\nway for \"\\dAp+\" while printing plain proname for \"\\dAp\".\n\nBTW, isn't this:\n\n \" format ('%%s (%%s, %%s)',\\n\"\n \" CASE\\n\"\n \" WHEN pg_catalog.pg_operator_is_visible(op.oid) \\n\"\n \" THEN op.oprname::pg_catalog.text \\n\"\n \" ELSE o.amopopr::pg_catalog.regoper::pg_catalog.text \\n\"\n \" END,\\n\"\n \" pg_catalog.format_type(o.amoplefttype, NULL),\\n\"\n \" pg_catalog.format_type(o.amoprighttype, NULL)\\n\"\n \" ) AS \\\"%s\\\"\\n,\"\n\njust an extremely painful way to duplicate the results of regoperator?\n(You could likely remove the joins to pg_proc and pg_operator altogether\nif you relied on regprocedure and regoperator casts.)\n\n> I'm not yet convinced we should change the sort key for \\dAo.\n\nAfter playing with this more, I'm less worried about that than\nI was. I think I was concerned that the operator name would\nsort ahead of amopstrategy, but now I see that the op name isn't\npart of the sort key at all.\n\nBTW, these queries seem inadequately schema-qualified, notably\nthe format() calls.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 11 Jul 2020 15:59:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: output columns of \\dAo and \\dAp"
},
{
"msg_contents": "On Sat, Jul 11, 2020 at 10:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > The proposed patch is attached. This patch is fixes two points:\n> > * Adds strategy number and purpose to output of \\dAo\n> > * Renames \"Left/right arg type\" columns of \\dAp to \"Registered left/right type\"\n>\n> I think that \\dAp should additionally be changed to print the\n> function via \"oid::regprocedure\", not just proname. A possible\n> compromise, if you think that's too wordy, is to do it that\n> way for \"\\dAp+\" while printing plain proname for \"\\dAp\".\n\nGood compromise. Done as you proposed.\n\n> BTW, isn't this:\n>\n> \" format ('%%s (%%s, %%s)',\\n\"\n> \" CASE\\n\"\n> \" WHEN pg_catalog.pg_operator_is_visible(op.oid) \\n\"\n> \" THEN op.oprname::pg_catalog.text \\n\"\n> \" ELSE o.amopopr::pg_catalog.regoper::pg_catalog.text \\n\"\n> \" END,\\n\"\n> \" pg_catalog.format_type(o.amoplefttype, NULL),\\n\"\n> \" pg_catalog.format_type(o.amoprighttype, NULL)\\n\"\n> \" ) AS \\\"%s\\\"\\n,\"\n>\n> just an extremely painful way to duplicate the results of regoperator?\n> (You could likely remove the joins to pg_proc and pg_operator altogether\n> if you relied on regprocedure and regoperator casts.)\n\nYeah, this subquery is totally dumb. Replaced with cast to regoperator.\n\n> > I'm not yet convinced we should change the sort key for \\dAo.\n>\n> After playing with this more, I'm less worried about that than\n> I was. I think I was concerned that the operator name would\n> sort ahead of amopstrategy, but now I see that the op name isn't\n> part of the sort key at all.\n\nOk.\n\n> BTW, these queries seem inadequately schema-qualified, notably\n> the format() calls.\n\nThank you for pointing. I've added schema-qualification to pg_catalog\nfunctions and tables.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Mon, 13 Jul 2020 15:44:45 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: output columns of \\dAo and \\dAp"
},
{
"msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> Good compromise. Done as you proposed.\n\nI'm OK with this version.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 13 Jul 2020 10:37:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: output columns of \\dAo and \\dAp"
},
{
"msg_contents": "On 7/13/20 10:37 AM, Tom Lane wrote:\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n>> Good compromise. Done as you proposed.\n> \n> I'm OK with this version.\n\nI saw this was committed and the item was adjusted on the Open Items list.\n\nThank you!\n\nJonathan",
"msg_date": "Mon, 13 Jul 2020 12:54:50 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: output columns of \\dAo and \\dAp"
},
{
"msg_contents": "On Mon, Jul 13, 2020 at 7:54 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> On 7/13/20 10:37 AM, Tom Lane wrote:\n> > Alexander Korotkov <aekorotkov@gmail.com> writes:\n> >> Good compromise. Done as you proposed.\n> >\n> > I'm OK with this version.\n>\n> I saw this was committed and the item was adjusted on the Open Items list.\n\nThank you!\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 13 Jul 2020 19:57:57 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: output columns of \\dAo and \\dAp"
}
] |
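The regoperator simplification agreed on in the thread above can be tried directly from a shell. This is a hedged sketch, not the exact query committed to psql's describe.c: it assumes `psql` is installed and, ideally, a reachable server, and it degrades to an explanatory notice otherwise.

```shell
#!/bin/sh
# Sketch: instead of a hand-rolled format()/CASE over pg_operator,
# a single cast of pg_amop.amopopr to regoperator renders the
# operator with its argument types (illustrative query, not the
# committed \dAo text).
if command -v psql >/dev/null 2>&1; then
  result=$(psql -X -At -c "SELECT amopopr::pg_catalog.regoperator, amopstrategy FROM pg_catalog.pg_amop ORDER BY amopstrategy LIMIT 3" 2>/dev/null) \
    || result="no PostgreSQL server reachable; query shown for illustration"
else
  result="psql not installed; query shown for illustration"
fi
echo "$result"
```

With a server available, each row comes back as e.g. `=(integer,integer)|1`, which is exactly the `name(lefttype, righttype)` shape the old format() expression was reassembling by hand.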
[
{
"msg_contents": "The Debian Sid buildfarm members have dozens of failures over the last day,\nbecause the latest Perl packages caused \"perl -V:useshrplib\" to report false.\nOn thorntail, for some reason, \"perl5.30-sparc64-linux-gnu -V:useshrplib\" does\nreturn true. I've added PERL=perl5.30-sparc64-linux-gnu to thorntail.\n\n\n",
"msg_date": "Sat, 6 Jun 2020 15:20:17 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Debian Sid broke Perl"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> The Debian Sid buildfarm members have dozens of failures over the last day,\n> because the latest Perl packages caused \"perl -V:useshrplib\" to report false.\n\nHas anyone filed a bug report?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Jun 2020 18:38:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Debian Sid broke Perl"
},
{
"msg_contents": "On Sat, Jun 06, 2020 at 06:38:47PM -0400, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > The Debian Sid buildfarm members have dozens of failures over the last day,\n> > because the latest Perl packages caused \"perl -V:useshrplib\" to report false.\n> \n> Has anyone filed a bug report?\n\nNot me.\n\n\n",
"msg_date": "Sat, 6 Jun 2020 15:53:11 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Re: Debian Sid broke Perl"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Sat, Jun 06, 2020 at 06:38:47PM -0400, Tom Lane wrote:\n>> Noah Misch <noah@leadboat.com> writes:\n>>> The Debian Sid buildfarm members have dozens of failures over the last day,\n>>> because the latest Perl packages caused \"perl -V:useshrplib\" to report false.\n\n>> Has anyone filed a bug report?\n\n> Not me.\n\nA bit of searching turned up this:\n\nhttps://bugs.debian.org/cgi-bin/bugreport.cgi?bug=798626\n\nshowing that the change was intentional. Somebody should push back on\nthat, but not being a Debian person it probably shouldn't be me.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Jun 2020 19:00:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Debian Sid broke Perl"
},
{
"msg_contents": "I wrote:\n> A bit of searching turned up this:\n> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=798626\n\nBTW, as far as I can tell from the underlying discussion at\nhttps://bugs.debian.org/cgi-bin/bugreport.cgi?bug=962138\nthere was no actual change in the existence of the shared\nlibrary, but what is now happening is that we are getting\na result reflecting the fact that /usr/bin/perl itself is\nstatically linked.\n\nI wonder whether we could just drop the configure-time\ntest for useshrplib. The worst-case scenario is that a user\ngrinds through a build and eventually gets an obscure link error,\nbut that's probably quite uncommon these days.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Jun 2020 19:11:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Debian Sid broke Perl"
},
{
"msg_contents": "On Sat, Jun 06, 2020 at 07:11:51PM -0400, Tom Lane wrote:\n> I wrote:\n> > A bit of searching turned up this:\n> > https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=798626\n> \n> BTW, as far as I can tell from the underlying discussion at\n> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=962138\n> there was no actual change in the existence of the shared\n> library, but what is now happening is that we are getting\n> a result reflecting the fact that /usr/bin/perl itself is\n> statically linked.\n\nInteresting.\n\n> I wonder whether we could just drop the configure-time\n> test for useshrplib. The worst-case scenario is that a user\n> grinds through a build and eventually gets an obscure link error,\n> but that's probably quite uncommon these days.\n\nLosing that would not hurt much. This solution relies on all other Perl\nconfigure tests getting the same answer from /usr/bin/perl that they would get\nfrom /usr/bin/perl*gnu. thorntail currently does behave that way:\n\n$ diff -u <(perl -MConfig -e 'print Config::config_sh') <(perl5.30-sparc64-linux-gnu -MConfig -e 'print Config::config_sh')\n--- /dev/fd/63 2020-06-07 02:50:49.368000000 +0300\n+++ /dev/fd/62 2020-06-07 02:50:49.368000000 +0300\n@@ -103,14 +103,15 @@\n config_arg38='-Doptimize=-O2'\n config_arg39='-dEs'\n config_arg4='-Dcc=sparc64-linux-gnu-gcc'\n-config_arg40='-Uuseshrplib'\n+config_arg40='-Duseshrplib'\n+config_arg41='-Dlibperl=libperl.so.5.30.3'\n config_arg5='-Dcpp=sparc64-linux-gnu-cpp'\n config_arg6='-Dld=sparc64-linux-gnu-gcc'\n-config_arg7='-Dccflags=-DDEBIAN -DAPPLLIB_EXP=\"/etc/perl:/usr/lib/sparc64-linux-gnu/perl-base\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -fdebug-prefix-map=/build/perl-NCIX23/perl-5.30.3=. -fstack-protector-strong -Wformat -Werror=format-security'\n+config_arg7='-Dccflags=-DDEBIAN -DAPPLLIB_EXP=\"/etc/perl\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -fdebug-prefix-map=/build/perl-NCIX23/perl-5.30.3=. -fstack-protector-strong -Wformat -Werror=format-security'\n config_arg8='-Dldflags= -Wl,-z,relro'\n config_arg9='-Dlddlflags=-shared -Wl,-z,relro'\n-config_argc='40'\n-config_args='-Dmksymlinks -Dusethreads -Duselargefiles -Dcc=sparc64-linux-gnu-gcc -Dcpp=sparc64-linux-gnu-cpp -Dld=sparc64-linux-gnu-gcc -Dccflags=-DDEBIAN -DAPPLLIB_EXP=\"/etc/perl:/usr/lib/sparc64-linux-gnu/perl-base\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -fdebug-prefix-map=/build/perl-NCIX23/perl-5.30.3=. -fstack-protector-strong -Wformat -Werror=format-security -Dldflags= -Wl,-z,relro -Dlddlflags=-shared -Wl,-z,relro -Dcccdlflags=-fPIC -Darchname=sparc64-linux-gnu -Dprefix=/usr -Dprivlib=/usr/share/perl/5.30 -Darchlib=/usr/lib/sparc64-linux-gnu/perl/5.30 -Dvendorprefix=/usr -Dvendorlib=/usr/share/perl5 -Dvendorarch=/usr/lib/sparc64-linux-gnu/perl5/5.30 -Dsiteprefix=/usr/local -Dsitelib=/usr/local/share/perl/5.30.3 -Dsitearch=/usr/local/lib/sparc64-linux-gnu/perl/5.30.3 -Dman1dir=/usr/share/man/man1 -Dman3dir=/usr/share/man/man3 -Dsiteman1dir=/usr/local/man/man1 -Dsiteman3dir=/usr/local/man/man3 -Duse64bitint -Dman1ext=1 -Dman3ext=3perl -Dpager=/usr/bin/sensible-pager -Uafs -Ud_csh -Ud_ualarm -Uusesfio -Uusenm -Ui_libutil -Ui_xlocale -Uversiononly -DDEBUGGING=-g -Doptimize=-O2 -dEs -Uuseshrplib'\n+config_argc='41'\n+config_args='-Dmksymlinks -Dusethreads -Duselargefiles -Dcc=sparc64-linux-gnu-gcc -Dcpp=sparc64-linux-gnu-cpp -Dld=sparc64-linux-gnu-gcc -Dccflags=-DDEBIAN -DAPPLLIB_EXP=\"/etc/perl\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -fdebug-prefix-map=/build/perl-NCIX23/perl-5.30.3=. -fstack-protector-strong -Wformat -Werror=format-security -Dldflags= -Wl,-z,relro -Dlddlflags=-shared -Wl,-z,relro -Dcccdlflags=-fPIC -Darchname=sparc64-linux-gnu -Dprefix=/usr -Dprivlib=/usr/share/perl/5.30 -Darchlib=/usr/lib/sparc64-linux-gnu/perl/5.30 -Dvendorprefix=/usr -Dvendorlib=/usr/share/perl5 -Dvendorarch=/usr/lib/sparc64-linux-gnu/perl5/5.30 -Dsiteprefix=/usr/local -Dsitelib=/usr/local/share/perl/5.30.3 -Dsitearch=/usr/local/lib/sparc64-linux-gnu/perl/5.30.3 -Dman1dir=/usr/share/man/man1 -Dman3dir=/usr/share/man/man3 -Dsiteman1dir=/usr/local/man/man1 -Dsiteman3dir=/usr/local/man/man3 -Duse64bitint -Dman1ext=1 -Dman3ext=3perl -Dpager=/usr/bin/sensible-pager -Uafs -Ud_csh -Ud_ualarm -Uusesfio -Uusenm -Ui_libutil -Ui_xlocale -Uversiononly -DDEBUGGING=-g -Doptimize=-O2 -dEs -Duseshrplib -Dlibperl=libperl.so.5.30.3'\n contains='grep'\n cp='cp'\n cpio=''\n@@ -919,7 +920,7 @@\n lib_ext='.a'\n libc='libc-2.31.so'\n libdb_needs_pthread='N'\n-libperl='libperl.a'\n+libperl='libperl.so.5.30'\n libpth='/usr/local/lib /usr/include/sparc64-linux-gnu /usr/lib /lib/sparc64-linux-gnu /lib/../lib /usr/lib/sparc64-linux-gnu /usr/lib/../lib /lib'\n libs='-lgdbm -lgdbm_compat -ldb -ldl -lm -lpthread -lc -lcrypt'\n libsdirs=' /usr/lib/sparc64-linux-gnu'\n@@ -1203,7 +1204,7 @@\n usequadmath='undef'\n usereentrant='undef'\n userelocatableinc='undef'\n-useshrplib='false'\n+useshrplib='true'\n usesitecustomize='undef'\n usesocks='undef'\n usethreads='define'\n\n\n",
"msg_date": "Sat, 6 Jun 2020 16:53:19 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Re: Debian Sid broke Perl"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Sat, Jun 06, 2020 at 07:11:51PM -0400, Tom Lane wrote:\n>> I wonder whether we could just drop the configure-time\n>> test for useshrplib.\n\n> Losing that would not hurt much. This solution relies on all other Perl\n> configure tests getting the same answer from /usr/bin/perl that they would get\n> from /usr/bin/perl*gnu.\n\nAye, there's the rub.\n\n> thorntail currently does behave that way:\n\nDoes not, you mean? This part looks pretty fatal to the idea:\n\n> @@ -919,7 +920,7 @@\n> lib_ext='.a'\n> libc='libc-2.31.so'\n> libdb_needs_pthread='N'\n> -libperl='libperl.a'\n> +libperl='libperl.so.5.30'\n> libpth='/usr/local/lib /usr/include/sparc64-linux-gnu /usr/lib /lib/sparc64-linux-gnu /lib/../lib /usr/lib/sparc64-linux-gnu /usr/lib/../lib /lib'\n> libs='-lgdbm -lgdbm_compat -ldb -ldl -lm -lpthread -lc -lcrypt'\n> libsdirs=' /usr/lib/sparc64-linux-gnu'\n\nWe can't accept linking plperl to the static libperl.a --- even if it\nmanages to work, which it won't on some hardware, that would result in\nlibperl becoming embedded in plperl.so. That breaks every rule of good\ndistribution management.\n\nI fear we shall have to push back against this as a breaking change.\nWe can't realistically be expected to look for some non-default version\nof perl to get our build settings from.\n\nHowever ... if I'm reading perl.m4 correctly, what actually matters\nhere is what we get from\n\n$PERL -MExtUtils::Embed -e ldopts\n\nCould you double-check what that produces in each case?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Jun 2020 20:38:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Debian Sid broke Perl"
},
{
"msg_contents": "On Sat, Jun 06, 2020 at 08:38:13PM -0400, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > On Sat, Jun 06, 2020 at 07:11:51PM -0400, Tom Lane wrote:\n> >> I wonder whether we could just drop the configure-time\n> >> test for useshrplib.\n> \n> > Losing that would not hurt much. This solution relies on all other Perl\n> > configure tests getting the same answer from /usr/bin/perl that they would get\n> > from /usr/bin/perl*gnu.\n> \n> Aye, there's the rub.\n> \n> > thorntail currently does behave that way:\n> \n> Does not, you mean? This part looks pretty fatal to the idea:\n\nI meant that PostgreSQL's ./configure must get the same answers, and it does\n(should have posted this instead of what I did post):\n\nchecking for PERL... perlwrap\nconfigure: using perl 5.30.3\nchecking for Perl archlibexp... /usr/lib/sparc64-linux-gnu/perl/5.30\nchecking for Perl privlibexp... /usr/share/perl/5.30\nchecking for Perl useshrplib... true\nchecking for CFLAGS recommended by Perl... -D_REENTRANT -D_GNU_SOURCE -DDEBIAN -fwrapv -fno-strict-aliasing -pipe -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64\nchecking for CFLAGS to compile embedded Perl... -DDEBIAN\nchecking for flags to link embedded Perl... -fstack-protector-strong -L/usr/local/lib -L/usr/lib/sparc64-linux-gnu/perl/5.30/CORE -lperl -ldl -lm -lpthread -lc -lcrypt\n\nchecking for PERL... perl5.30-sparc64-linux-gnu\nconfigure: using perl 5.30.3\nchecking for Perl archlibexp... /usr/lib/sparc64-linux-gnu/perl/5.30\nchecking for Perl privlibexp... /usr/share/perl/5.30\nchecking for Perl useshrplib... true\nchecking for CFLAGS recommended by Perl... -D_REENTRANT -D_GNU_SOURCE -DDEBIAN -fwrapv -fno-strict-aliasing -pipe -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64\nchecking for CFLAGS to compile embedded Perl... -DDEBIAN \nchecking for flags to link embedded Perl... -fstack-protector-strong -L/usr/local/lib -L/usr/lib/sparc64-linux-gnu/perl/5.30/CORE -lperl -ldl -lm -lpthread -lc -lcrypt\n\n\n\"perlwrap\" is a script that fakes useshrplib:\n#! /bin/sh\nif [ \"$*\" = '-MConfig -e print $Config{useshrplib}' ]\nthen echo -n true\nelse exec perl \"$@\"\nfi\n\n\n",
"msg_date": "Sat, 6 Jun 2020 17:46:01 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Re: Debian Sid broke Perl"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> I meant that PostgreSQL's ./configure must get the same answers, and it does\n> (should have posted this instead of what I did post):\n\nAh, that looks good. I suppose that we can generally expect that the\nldflags output will look like \"-L/some/path -lperl ...\", and whether\nor not the libperl in that directory is .so or .a is not going to affect\nthings at this level. Furthermore, given that this output is specifically\ndefined to be flags to be used to *embed* libperl, it's the distro's own\nfault if they end up with libperl statically linked into other packages;\nthey should not be putting a .a-style library there.\n\nSo I'm content to fix this by removing the check for useshrplib.\nAny objections?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Jun 2020 21:02:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Debian Sid broke Perl"
},
{
"msg_contents": "I wrote:\n> So I'm content to fix this by removing the check for useshrplib.\n\nHaving said that ... it does not appear to me that the Debian perl\nmaintainer foresaw all the consequences of this change, so I went\nahead and filed some push-back at\n\nhttps://bugs.debian.org/cgi-bin/bugreport.cgi?bug=798626\n\nI'll wait to see the reply before changing anything.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Jun 2020 21:35:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Debian Sid broke Perl"
},
{
"msg_contents": "I wrote:\n> Having said that ... it does not appear to me that the Debian perl\n> maintainer foresaw all the consequences of this change, so I went\n> ahead and filed some push-back at\n> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=798626\n> I'll wait to see the reply before changing anything.\n\nThe maintainer says he'll revert the change, so I suppose the\nbuildfarm will go back to normal without extra effort on our part.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 07 Jun 2020 11:06:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Debian Sid broke Perl"
},
{
"msg_contents": "On Sun, Jun 07, 2020 at 11:06:27AM -0400, Tom Lane wrote:\n> The maintainer says he'll revert the change, so I suppose the\n> buildfarm will go back to normal without extra effort on our part.\n\nThe buildfarm has moved back to a green state as of the moment I am\nwriting this email (see serinus for example).\n--\nMichael",
"msg_date": "Mon, 8 Jun 2020 16:00:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Debian Sid broke Perl"
}
] |
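The configure-time probes this thread revolves around can be reproduced from a shell. This is a minimal sketch: it assumes nothing about how the local perl was built and falls back gracefully if perl (or ExtUtils::Embed) is not installed.

```shell
#!/bin/sh
# Probe the two Perl answers PostgreSQL's configure cares about here:
#  - useshrplib: whether perl was built against a shared libperl
#  - ldopts:     the flags ExtUtils::Embed recommends for embedding,
#                which is what configure actually uses to link plperl
if command -v perl >/dev/null 2>&1; then
  shrplib=$(perl -V:useshrplib 2>/dev/null)   # e.g. useshrplib='true';
  ldopts=$(perl -MExtUtils::Embed -e ldopts 2>/dev/null) \
    || ldopts="ExtUtils::Embed not available"
else
  shrplib="useshrplib='unknown';"
  ldopts="perl not installed"
fi
echo "$shrplib"
echo "$ldopts"
```

On the Debian Sid packages discussed above, the first probe briefly reported `useshrplib='false'` even though `ldopts` still pointed at a directory containing a shared libperl, which is why dropping the useshrplib check was a plausible fix.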
[
{
"msg_contents": "I experimented with running \"make check\" on ARM64 under a reasonably\nbleeding-edge valgrind (3.16.0). One thing I ran into is that\nregress.c's test_atomic_ops fails; valgrind shows the stack trace\n\n fun:__aarch64_cas8_acq_rel\n fun:pg_atomic_compare_exchange_u64_impl\n fun:pg_atomic_exchange_u64_impl\n fun:pg_atomic_write_u64_impl\n fun:pg_atomic_init_u64_impl\n fun:pg_atomic_init_u64\n fun:test_atomic_uint64\n fun:test_atomic_ops\n fun:ExecInterpExpr\n\nNow, this is basically the same thing as is already memorialized in\nsrc/tools/valgrind.supp:\n\n# Atomic writes to 64bit atomic vars uses compare/exchange to\n# guarantee atomic writes of 64bit variables. pg_atomic_write is used\n# during initialization of the atomic variable; that leads to an\n# initial read of the old, undefined, memory value. But that's just to\n# make sure the swap works correctly.\n{\n\tuninitialized_atomic_init_u64\n\tMemcheck:Cond\n\tfun:pg_atomic_exchange_u64_impl\n\tfun:pg_atomic_write_u64_impl\n\tfun:pg_atomic_init_u64_impl\n}\n\nso my first thought was that we just needed an architecture-specific\nvariant of that. But on thinking more about this, it seems like\ngeneric.h's version of pg_atomic_init_u64_impl is just fundamentally\nmisguided. Why isn't it simply assigning the value with an ordinary\nunlocked write? By definition, we had better not be using this function\nin any circumstance where there might be conflicting accesses, so I don't\nsee why we should need to tolerate a valgrind exception here. Moreover,\nif a simple assignment *isn't* good enough, then surely the spinlock\nversion in atomics.c is 100% broken.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 07 Jun 2020 00:23:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "valgrind versus pg_atomic_init()"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-07 00:23:35 -0400, Tom Lane wrote:\n> I experimented with running \"make check\" on ARM64 under a reasonably\n> bleeding-edge valgrind (3.16.0). One thing I ran into is that\n> regress.c's test_atomic_ops fails; valgrind shows the stack trace\n> \n> fun:__aarch64_cas8_acq_rel\n> fun:pg_atomic_compare_exchange_u64_impl\n> fun:pg_atomic_exchange_u64_impl\n> fun:pg_atomic_write_u64_impl\n> fun:pg_atomic_init_u64_impl\n> fun:pg_atomic_init_u64\n> fun:test_atomic_uint64\n> fun:test_atomic_ops\n> fun:ExecInterpExpr\n> \n> Now, this is basically the same thing as is already memorialized in\n> src/tools/valgrind.supp:\n> \n> # Atomic writes to 64bit atomic vars uses compare/exchange to\n> # guarantee atomic writes of 64bit variables. pg_atomic_write is used\n> # during initialization of the atomic variable; that leads to an\n> # initial read of the old, undefined, memory value. But that's just to\n> # make sure the swap works correctly.\n> {\n> \tuninitialized_atomic_init_u64\n> \tMemcheck:Cond\n> \tfun:pg_atomic_exchange_u64_impl\n> \tfun:pg_atomic_write_u64_impl\n> \tfun:pg_atomic_init_u64_impl\n> }\n> \n> so my first thought was that we just needed an architecture-specific\n> variant of that. But on thinking more about this, it seems like\n> generic.h's version of pg_atomic_init_u64_impl is just fundamentally\n> misguided. Why isn't it simply assigning the value with an ordinary\n> unlocked write? By definition, we had better not be using this function\n> in any circumstance where there might be conflicting accesses, so I don't\n> see why we should need to tolerate a valgrind exception here. Moreover,\n> if a simple assignment *isn't* good enough, then surely the spinlock\n> version in atomics.c is 100% broken.\n\nYea, it could just do that. It seemed slightly easier/clearer, back when\nI wrote it, to just use pg_atomic_write* for the initialization, but\nthis seems enough of a reason to stop doing so. Will change it in all\nbranches, unless somebody sees a reason to not do so?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 8 Jun 2020 15:02:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: valgrind versus pg_atomic_init()"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-06-07 00:23:35 -0400, Tom Lane wrote:\n>> so my first thought was that we just needed an architecture-specific\n>> variant of that. But on thinking more about this, it seems like\n>> generic.h's version of pg_atomic_init_u64_impl is just fundamentally\n>> misguided. Why isn't it simply assigning the value with an ordinary\n>> unlocked write? By definition, we had better not be using this function\n>> in any circumstance where there might be conflicting accesses, so I don't\n>> see why we should need to tolerate a valgrind exception here. Moreover,\n>> if a simple assignment *isn't* good enough, then surely the spinlock\n>> version in atomics.c is 100% broken.\n\n> Yea, it could just do that. It seemed slightly easier/clearer, back when\n> I wrote it, to just use pg_atomic_write* for the initialization, but\n> this seems enough of a reason to stop doing so. Will change it in all\n> branches, unless somebody sees a reason to not do so?\n\nWorks for me.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Jun 2020 18:21:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: valgrind versus pg_atomic_init()"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-08 18:21:06 -0400, Tom Lane wrote:\n> > Yea, it could just do that. It seemed slightly easier/clearer, back when\n> > I wrote it, to just use pg_atomic_write* for the initialization, but\n> > this seems enough of a reason to stop doing so. Will change it in all\n> > branches, unless somebody sees a reason to not do so?\n>\n> Works for me.\n\nAnd done.\n\n- Andres\n\n\n",
"msg_date": "Mon, 8 Jun 2020 20:25:09 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: valgrind versus pg_atomic_init()"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> And done.\n\nLGTM, thanks!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Jun 2020 23:31:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: valgrind versus pg_atomic_init()"
},
{
"msg_contents": "On Sun, Jun 07, 2020 at 12:23:35AM -0400, Tom Lane wrote:\n> I experimented with running \"make check\" on ARM64 under a reasonably\n> bleeding-edge valgrind (3.16.0). One thing I ran into is that\n> regress.c's test_atomic_ops fails; valgrind shows the stack trace\n> \n> fun:__aarch64_cas8_acq_rel\n> fun:pg_atomic_compare_exchange_u64_impl\n> fun:pg_atomic_exchange_u64_impl\n> fun:pg_atomic_write_u64_impl\n> fun:pg_atomic_init_u64_impl\n> fun:pg_atomic_init_u64\n> fun:test_atomic_uint64\n> fun:test_atomic_ops\n> fun:ExecInterpExpr\n> \n> Now, this is basically the same thing as is already memorialized in\n> src/tools/valgrind.supp:\n> \n> # Atomic writes to 64bit atomic vars uses compare/exchange to\n> # guarantee atomic writes of 64bit variables. pg_atomic_write is used\n> # during initialization of the atomic variable; that leads to an\n> # initial read of the old, undefined, memory value. But that's just to\n> # make sure the swap works correctly.\n> {\n> \tuninitialized_atomic_init_u64\n> \tMemcheck:Cond\n> \tfun:pg_atomic_exchange_u64_impl\n> \tfun:pg_atomic_write_u64_impl\n> \tfun:pg_atomic_init_u64_impl\n> }\n> \n> so my first thought was that we just needed an architecture-specific\n> variant of that. But on thinking more about this, it seems like\n> generic.h's version of pg_atomic_init_u64_impl is just fundamentally\n> misguided. Why isn't it simply assigning the value with an ordinary\n> unlocked write? By definition, we had better not be using this function\n> in any circumstance where there might be conflicting accesses\n\nDoes something guarantee the write will be globally-visible by the time the\nfirst concurrent accessor shows up? (If not, one could (a) do an unlocked\nptr->value=0, then the atomic write, or (b) revert and improve the\nsuppression.) I don't doubt it's fine for the ways PostgreSQL uses atomics\ntoday, which generally initialize an atomic before the concurrent-accessor\nprocesses even exist.\n\n> , so I don't\n> see why we should need to tolerate a valgrind exception here. Moreover,\n> if a simple assignment *isn't* good enough, then surely the spinlock\n> version in atomics.c is 100% broken.\n\nAre you saying it would imply a bug in atomics.c pg_atomic_init_u64_impl(),\npg_atomic_compare_exchange_u64_impl(), or pg_atomic_fetch_add_u64_impl()? Can\nyou explain that more? If you were referring to unlocked \"*(lock) = 0\", that\nis different since it's safe to have a delay in propagation of the change from\nlocked state to unlocked state.\n\n\n",
"msg_date": "Sun, 14 Jun 2020 18:55:27 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: valgrind versus pg_atomic_init()"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Sun, Jun 07, 2020 at 12:23:35AM -0400, Tom Lane wrote:\n>> ... But on thinking more about this, it seems like\n>> generic.h's version of pg_atomic_init_u64_impl is just fundamentally\n>> misguided. Why isn't it simply assigning the value with an ordinary\n>> unlocked write? By definition, we had better not be using this function\n>> in any circumstance where there might be conflicting accesses\n\n> Does something guarantee the write will be globally-visible by the time the\n> first concurrent accessor shows up? (If not, one could (a) do an unlocked\n> ptr->value=0, then the atomic write, or (b) revert and improve the\n> suppression.) I don't doubt it's fine for the ways PostgreSQL uses atomics\n> today, which generally initialize an atomic before the concurrent-accessor\n> processes even exist.\n\nPerhaps it'd be worth putting a memory barrier at the end of the _init\nfunction(s)? As you say, this is hypothetical right now, but that'd be\na cheap improvement.\n\n>> if a simple assignment *isn't* good enough, then surely the spinlock\n>> version in atomics.c is 100% broken.\n\n> Are you saying it would imply a bug in atomics.c pg_atomic_init_u64_impl(),\n> pg_atomic_compare_exchange_u64_impl(), or pg_atomic_fetch_add_u64_impl()? Can\n> you explain that more?\n\nMy point was that doing SpinLockInit while somebody else is already trying\nto acquire or release the spinlock is not going to work out well. So\nthere has to be a really clear boundary between \"initialization\" mode\nand \"use\" mode; which is more or less the same point you make above.\n\nIn practice, if that line is so fine that we need a memory sync operation\nto enforce it, things are probably broken anyhow.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 14 Jun 2020 22:30:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: valgrind versus pg_atomic_init()"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-14 18:55:27 -0700, Noah Misch wrote:\n> Does something guarantee the write will be globally-visible by the time the\n> first concurrent accessor shows up?\n\nThe function comments say:\n\n *\n * Has to be done before any concurrent usage..\n *\n * No barrier semantics.\n\n\n> (If not, one could (a) do an unlocked ptr->value=0, then the atomic\n> write, or (b) revert and improve the suppression.) I don't doubt it's\n> fine for the ways PostgreSQL uses atomics today, which generally\n> initialize an atomic before the concurrent-accessor processes even\n> exist.\n\nI think it's unlikely that there are cases where you could safely\ninitialize the atomic without needing some form of synchronization\nbefore it can be used. If a barrier were needed, what'd guarantee the\nconcurrent access happened after the initialization in the first place?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 14 Jun 2020 21:16:20 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: valgrind versus pg_atomic_init()"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-14 22:30:25 -0400, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > On Sun, Jun 07, 2020 at 12:23:35AM -0400, Tom Lane wrote:\n> >> ... But on thinking more about this, it seems like\n> >> generic.h's version of pg_atomic_init_u64_impl is just fundamentally\n> >> misguided. Why isn't it simply assigning the value with an ordinary\n> >> unlocked write? By definition, we had better not be using this function\n> >> in any circumstance where there might be conflicting accesses\n> \n> > Does something guarantee the write will be globally-visible by the time the\n> > first concurrent accessor shows up? (If not, one could (a) do an unlocked\n> > ptr->value=0, then the atomic write, or (b) revert and improve the\n> > suppression.) I don't doubt it's fine for the ways PostgreSQL uses atomics\n> > today, which generally initialize an atomic before the concurrent-accessor\n> > processes even exist.\n> \n> Perhaps it'd be worth putting a memory barrier at the end of the _init\n> function(s)? As you say, this is hypothetical right now, but that'd be\n> a cheap improvement.\n\nI don't think it'd be that cheap for some cases. There's an atomic for\nevery shared buffer, making their initialization full memory barriers\nwould likely be noticable for larger shared_buffers values.\n\nAs you say:\n\n> In practice, if that line is so fine that we need a memory sync operation\n> to enforce it, things are probably broken anyhow.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 14 Jun 2020 21:19:04 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: valgrind versus pg_atomic_init()"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-06-14 22:30:25 -0400, Tom Lane wrote:\n>> Perhaps it'd be worth putting a memory barrier at the end of the _init\n>> function(s)? As you say, this is hypothetical right now, but that'd be\n>> a cheap improvement.\n\n> I don't think it'd be that cheap for some cases. There's an atomic for\n> every shared buffer, making their initialization full memory barriers\n> would likely be noticable for larger shared_buffers values.\n\nFair point --- if we did need to do something to make this safer, doing it\nat the level of individual atomic values would be the wrong thing anyway.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Jun 2020 00:26:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: valgrind versus pg_atomic_init()"
},
{
"msg_contents": "On Sun, Jun 14, 2020 at 09:16:20PM -0700, Andres Freund wrote:\n> On 2020-06-14 18:55:27 -0700, Noah Misch wrote:\n> > Does something guarantee the write will be globally-visible by the time the\n> > first concurrent accessor shows up?\n> \n> The function comments say:\n> \n> *\n> * Has to be done before any concurrent usage..\n> *\n> * No barrier semantics.\n> \n> \n> > (If not, one could (a) do an unlocked ptr->value=0, then the atomic\n> > write, or (b) revert and improve the suppression.) I don't doubt it's\n> > fine for the ways PostgreSQL uses atomics today, which generally\n> > initialize an atomic before the concurrent-accessor processes even\n> > exist.\n> \n> I think it's unlikely that there are cases where you could safely\n> initialize the atomic without needing some form of synchronization\n> before it can be used. If a barrier were needed, what'd guarantee the\n> concurrent access happened after the initialization in the first place?\n\nSuppose the initializing process does:\n\n pg_atomic_init_u64(&somestruct->atomic, 123);\n somestruct->atomic_ready = true;\n\nIn released versions, any process observing atomic_ready==true will observe\nthe results of the pg_atomic_init_u64(). After the commit from this thread,\nthat's no longer assured. Having said that, I agree with a special case of\nTom's assertion about spinlocks, namely that this has same problem:\n\n /* somestruct->lock already happens to contain value 0 */\n SpinLockInit(&somestruct->lock);\n somestruct->lock_ready = true;\n\n\n",
"msg_date": "Tue, 16 Jun 2020 20:24:29 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: valgrind versus pg_atomic_init()"
},
{
"msg_contents": "Hi, \n\nOn June 16, 2020 8:24:29 PM PDT, Noah Misch <noah@leadboat.com> wrote:\n>On Sun, Jun 14, 2020 at 09:16:20PM -0700, Andres Freund wrote:\n>> On 2020-06-14 18:55:27 -0700, Noah Misch wrote:\n>> > Does something guarantee the write will be globally-visible by the\n>time the\n>> > first concurrent accessor shows up?\n>> \n>> The function comments say:\n>> \n>> *\n>> * Has to be done before any concurrent usage..\n>> *\n>> * No barrier semantics.\n>> \n>> \n>> > (If not, one could (a) do an unlocked ptr->value=0, then the atomic\n>> > write, or (b) revert and improve the suppression.) I don't doubt\n>it's\n>> > fine for the ways PostgreSQL uses atomics today, which generally\n>> > initialize an atomic before the concurrent-accessor processes even\n>> > exist.\n>> \n>> I think it's unlikely that there are cases where you could safely\n>> initialize the atomic without needing some form of synchronization\n>> before it can be used. If a barrier were needed, what'd guarantee the\n>> concurrent access happened after the initialization in the first\n>place?\n>\n>Suppose the initializing process does:\n>\n> pg_atomic_init_u64(&somestruct->atomic, 123);\n> somestruct->atomic_ready = true;\n>\n>In released versions, any process observing atomic_ready==true will\n>observe\n>the results of the pg_atomic_init_u64(). After the commit from this\n>thread,\n>that's no longer assured.\n\nWhy did that hold true before? There wasn't a barrier in platforms already (wherever we know what 64 bit reads/writes have single copy atomicity).\n\nAndres\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Tue, 16 Jun 2020 20:35:58 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: valgrind versus pg_atomic_init()"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On June 16, 2020 8:24:29 PM PDT, Noah Misch <noah@leadboat.com> wrote:\n>> Suppose the initializing process does:\n>> \n>> pg_atomic_init_u64(&somestruct->atomic, 123);\n>> somestruct->atomic_ready = true;\n>> \n>> In released versions, any process observing atomic_ready==true will\n>> observe\n>> the results of the pg_atomic_init_u64(). After the commit from this\n>> thread,\n>> that's no longer assured.\n\n> Why did that hold true before? There wasn't a barrier in platforms already (wherever we know what 64 bit reads/writes have single copy atomicity).\n\nI'm confused as to why this is even an interesting discussion. If the\ntiming is so tight that another process could possibly observe partially-\ninitialized state in shared memory, how could we have confidence that the\nother process doesn't look before we've initialized the atomic variable or\nspinlock at all?\n\nI think in practice all we need depend on in this area is that fork()\nprovides a full memory barrier.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Jun 2020 23:47:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: valgrind versus pg_atomic_init()"
},
{
"msg_contents": "On Tue, Jun 16, 2020 at 08:35:58PM -0700, Andres Freund wrote:\n> On June 16, 2020 8:24:29 PM PDT, Noah Misch <noah@leadboat.com> wrote:\n> >Suppose the initializing process does:\n> >\n> > pg_atomic_init_u64(&somestruct->atomic, 123);\n> > somestruct->atomic_ready = true;\n> >\n> >In released versions, any process observing atomic_ready==true will\n> >observe\n> >the results of the pg_atomic_init_u64(). After the commit from this\n> >thread,\n> >that's no longer assured.\n> \n> Why did that hold true before? There wasn't a barrier in platforms already (wherever we know what 64 bit reads/writes have single copy atomicity).\n\nYou are right. It didn't hold before.\n\n\n",
"msg_date": "Tue, 16 Jun 2020 21:01:21 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: valgrind versus pg_atomic_init()"
}
]
[
{
"msg_contents": "I want to execute some testing code but I conflict against error.\nPlease tell me how to fix it :)\n\nEnvironment:\n=================================================================\nOSX: 10.14.6\npostgresql: version 13 current development\n\nIn postgresql/src/bin/pg_dump dir I command below\n\n$ make check PROVE_TESTS='t/001_basic.pi'\n\nError:\n================================================================\n/Library/Developer/CommandLineTools/usr/bin/make -C ../../../src/backend\ngenerated-headers\n/Library/Developer/CommandLineTools/usr/bin/make -C catalog distprep\ngenerated-header-symlinks\nmake[2]: Nothing to be done for `distprep'.\nmake[2]: Nothing to be done for `generated-header-symlinks'.\n/Library/Developer/CommandLineTools/usr/bin/make -C utils distprep\ngenerated-header-symlinks\nmake[2]: Nothing to be done for `distprep'.\nmake[2]: Nothing to be done for `generated-header-symlinks'.\nrm -rf '/Users/admin/Documents/Github/postgresql'/tmp_install\n/bin/sh ../../../config/install-sh -c -d\n'/Users/admin/Documents/Github/postgresql'/tmp_install/log\n/Library/Developer/CommandLineTools/usr/bin/make -C '../../..'\nDESTDIR='/Users/admin/Documents/Github/postgresql'/tmp_install install\n>'/Users/admin/Documents/Github/postgresql'/tmp_install/log/install.log 2>&1\n/Library/Developer/CommandLineTools/usr/bin/make -j1 checkprep\n>>'/Users/admin/Documents/Github/postgresql'/tmp_install/log/install.log\n2>&1\nTAP tests not enabled\n\nI want to execute some testing code but I conflict against error.Please tell me how to fix it :)Environment:=================================================================OSX: 10.14.6postgresql: version 13 current developmentIn postgresql/src/bin/pg_dump dir I command below$ make check PROVE_TESTS='t/001_basic.pi'Error: ================================================================/Library/Developer/CommandLineTools/usr/bin/make -C ../../../src/backend 
generated-headers/Library/Developer/CommandLineTools/usr/bin/make -C catalog distprep generated-header-symlinksmake[2]: Nothing to be done for `distprep'.make[2]: Nothing to be done for `generated-header-symlinks'./Library/Developer/CommandLineTools/usr/bin/make -C utils distprep generated-header-symlinksmake[2]: Nothing to be done for `distprep'.make[2]: Nothing to be done for `generated-header-symlinks'.rm -rf '/Users/admin/Documents/Github/postgresql'/tmp_install/bin/sh ../../../config/install-sh -c -d '/Users/admin/Documents/Github/postgresql'/tmp_install/log/Library/Developer/CommandLineTools/usr/bin/make -C '../../..' DESTDIR='/Users/admin/Documents/Github/postgresql'/tmp_install install >'/Users/admin/Documents/Github/postgresql'/tmp_install/log/install.log 2>&1/Library/Developer/CommandLineTools/usr/bin/make -j1 checkprep >>'/Users/admin/Documents/Github/postgresql'/tmp_install/log/install.log 2>&1TAP tests not enabled",
"msg_date": "Sun, 7 Jun 2020 16:51:08 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "TAP tests not enabled in pg_dump"
},
{
"msg_contents": "On Sun, Jun 7, 2020 at 7:51 PM Dong Wook Lee <sh95119@gmail.com> wrote:\n> TAP tests not enabled\n\nDid you use --enable-tap-tests when running the configure script?\n\n\n",
"msg_date": "Sun, 7 Jun 2020 20:07:10 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests not enabled in pg_dump"
}
]
[
{
"msg_contents": "Hi,\n\nI'd like to propose $subject, as embodied in the attached patch. This\nmakes it possible to discover and fulfill a need for logical\nreplication that can arise at a time when bouncing the server has\nbecome impractical, i.e. when there is already high demand on it.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Mon, 8 Jun 2020 06:38:33 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Bump default wal_level to logical"
},
{
"msg_contents": "On Mon, Jun 8, 2020 at 10:08 AM David Fetter <david@fetter.org> wrote:\n>\n> Hi,\n>\n> I'd like to propose $subject, as embodied in the attached patch. This\n> makes it possible to discover and fulfill a need for logical\n> replication that can arise at a time when bouncing the server has\n> become impractical, i.e. when there is already high demand on it.\n>\n\nI think we should first do performance testing to see what is the\noverhead of making this default. I think pgbench read-write at\nvarious scale factors would be a good starting point. Also, we should\nsee how much additional WAL it generates as compared to current\ndefault.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 8 Jun 2020 11:59:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bump default wal_level to logical"
},
{
"msg_contents": "On Mon, Jun 08, 2020 at 11:59:14AM +0530, Amit Kapila wrote:\n> I think we should first do performance testing to see what is the\n> overhead of making this default. I think pgbench read-write at\n> various scale factors would be a good starting point. Also, we should\n> see how much additional WAL it generates as compared to current\n> default.\n\n+1. I recall that changing wal_level to logical has been discussed in\nthe past and performance was the actual take to debate on.\n--\nMichael",
"msg_date": "Mon, 8 Jun 2020 15:45:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Bump default wal_level to logical"
},
{
"msg_contents": "On Mon, Jun 8, 2020 at 8:46 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, Jun 08, 2020 at 11:59:14AM +0530, Amit Kapila wrote:\n> > I think we should first do performance testing to see what is the\n> > overhead of making this default. I think pgbench read-write at\n> > various scale factors would be a good starting point. Also, we should\n> > see how much additional WAL it generates as compared to current\n> > default.\n>\n> +1. I recall that changing wal_level to logical has been discussed in\n> the past and performance was the actual take to debate on.\n>\n\nThat was at least the discussion (long-going and multi-repeated) before we\nupped it from minimal to replica. There were some pretty extensive\nbenchmarking done to prove that the difference was very small, and this was\nweighed against the ability to take basic backups of the system (which\narguably is more important than being able to do logical replication).\n\nI agree that we should consider changing it *if* it does not come with a\nsubstantial overhead, but that has to be shown.\n\nOf course, what would be even neater would be if it could be changed so you\ndon't have to bounce the server to change the wal_level. That's a bigger\nchange though, but perhaps it is now possible once we have the \"global\nbarriers\" in 13?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Mon, Jun 8, 2020 at 8:46 AM Michael Paquier <michael@paquier.xyz> wrote:On Mon, Jun 08, 2020 at 11:59:14AM +0530, Amit Kapila wrote:\n> I think we should first do performance testing to see what is the\n> overhead of making this default. I think pgbench read-write at\n> various scale factors would be a good starting point. Also, we should\n> see how much additional WAL it generates as compared to current\n> default.\n\n+1. 
I recall that changing wal_level to logical has been discussed in\nthe past and performance was the actual take to debate on.That was at least the discussion (long-going and multi-repeated) before we upped it from minimal to replica. There were some pretty extensive benchmarking done to prove that the difference was very small, and this was weighed against the ability to take basic backups of the system (which arguably is more important than being able to do logical replication).I agree that we should consider changing it *if* it does not come with a substantial overhead, but that has to be shown.Of course, what would be even neater would be if it could be changed so you don't have to bounce the server to change the wal_level. That's a bigger change though, but perhaps it is now possible once we have the \"global barriers\" in 13?-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Mon, 8 Jun 2020 11:10:38 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Bump default wal_level to logical"
},
{
"msg_contents": "On Mon, Jun 08, 2020 at 11:10:38AM +0200, Magnus Hagander wrote:\n>On Mon, Jun 8, 2020 at 8:46 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n>> On Mon, Jun 08, 2020 at 11:59:14AM +0530, Amit Kapila wrote:\n>> > I think we should first do performance testing to see what is the\n>> > overhead of making this default. I think pgbench read-write at\n>> > various scale factors would be a good starting point. Also, we should\n>> > see how much additional WAL it generates as compared to current\n>> > default.\n>>\n>> +1. I recall that changing wal_level to logical has been discussed in\n>> the past and performance was the actual take to debate on.\n>>\n>\n>That was at least the discussion (long-going and multi-repeated) before we\n>upped it from minimal to replica. There were some pretty extensive\n>benchmarking done to prove that the difference was very small, and this was\n>weighed against the ability to take basic backups of the system (which\n>arguably is more important than being able to do logical replication).\n>\n>I agree that we should consider changing it *if* it does not come with a\n>substantial overhead, but that has to be shown.\n>\n\nI agree performance evaluation is necessary, and I'm willing to spend\nsome time on it. But I don't think the difference will be much worse\nthan for the wal_level=replica, at least for common workloads. It's\ncertainly possible to construct workloads with significant impact, due\nto the extra stuff (assignments, cache invalidations and so on).\n\nIn general I think the case is somewhat weaker compared to the replica\ncase, which was required for such basic things like physical backups.\n\n\n>Of course, what would be even neater would be if it could be changed so\n>you don't have to bounce the server to change the wal_level. That's a\n>bigger change though, but perhaps it is now possible once we have the\n>\"global barriers\" in 13?\n>\n\nYeah. 
That would sidestep a lot of the performance concerns, because if\nswitching from replica to logical is fairly easy / without restart, we\ncould keep the current default.\n\nNot sure if it's sufficient, though, because switching to logical may\nrequire bumping up number of slots, walsenders, etc. At which point you\nactually need a restart. Not to mention that extensions using logical\ndecoding (like pglogical) need to allocate shared memory etc. But for\nthe built-in logical replication that is not an issue, ofc.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 8 Jun 2020 15:38:24 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Bump default wal_level to logical"
},
{
"msg_contents": "On Mon, Jun 08, 2020 at 11:10:38AM +0200, Magnus Hagander wrote:\n> On Mon, Jun 8, 2020 at 8:46 AM Michael Paquier <michael@paquier.xyz> wrote:\n> \n> > On Mon, Jun 08, 2020 at 11:59:14AM +0530, Amit Kapila wrote:\n> > > I think we should first do performance testing to see what is the\n> > > overhead of making this default. I think pgbench read-write at\n> > > various scale factors would be a good starting point. Also, we should\n> > > see how much additional WAL it generates as compared to current\n> > > default.\n> >\n> > +1. I recall that changing wal_level to logical has been discussed in\n> > the past and performance was the actual take to debate on.\n> >\n> \n> That was at least the discussion (long-going and multi-repeated) before we\n> upped it from minimal to replica. There were some pretty extensive\n> benchmarking done to prove that the difference was very small, and this was\n> weighed against the ability to take basic backups of the system (which\n> arguably is more important than being able to do logical replication).\n\nI'd argue this a different direction.\n\nLogical replication has been at fundamental to how a lot of systems\noperate since Slony came out for the very good reason that it was far\nand away the simplest way to accomplish a bunch of design goals. There\nare now, and have been for some years, both free and proprietary\nsystems whose sole purpose is change data capture. 
PostgreSQL can play\nnicely with those systems with this switch flipped on by default.\n\nLooking into the future of PostgreSQL itself, there are things we've\nbeen unable to do thus far that logical replication makes tractable.\nThese include:\n\n- Zero downtime version changes\n- Substantive changes to our on-disk representations between versions\n because upgrading in place places sharp limits on what we could do.\n\n> I agree that we should consider changing it *if* it does not come\n> with a substantial overhead, but that has to be shown.\n\nWhat overhead would be substantial enough to require more work than\nchanging the default, and under what circumstances?\n\nI ask this because on a heavily loaded system, the kind where\ndifferences could be practical as opposed to merely statistically\nsignificant, statement logging at even the most basic level is a much\nbigger burden than the maxed-out WALs are. Any overhead those WALs\nmight impose simply disappears in the noise. The difference is even\nmore stark in systems subject to audit.\n\n> Of course, what would be even neater would be if it could be changed\n> so you don't have to bounce the server to change the wal_level.\n> That's a bigger change though, but perhaps it is now possible once\n> we have the \"global barriers\" in 13?\n\nAs much as I would love to have this capability, I was hoping to keep\nthe scope of this contained. As pointed out down-thread, there's lots\nmore to doing this dynamically that just turning up the wal_level.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Mon, 8 Jun 2020 18:18:38 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Bump default wal_level to logical"
},
{
"msg_contents": "On 2020-Jun-08, Tomas Vondra wrote:\n\n> Not sure if it's sufficient, though, because switching to logical may\n> require bumping up number of slots, walsenders, etc. At which point you\n> actually need a restart. Not to mention that extensions using logical\n> decoding (like pglogical) need to allocate shared memory etc. But for\n> the built-in logical replication that is not an issue, ofc.\n\nI think it's reasonable to push our default limits for slots,\nwalsenders, max_bgworkers etc a lot higher than current value (say 10 ->\n100). An unused slot wastes essentially no resources; an unused\nwalsender is just one PGPROC entry. If we did that, and also allowed\nwal_level to be changed on the fly, we wouldn't need to restart in order\nto enable logical replication, so there would be little or no pressure\nto change the wal_level default.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 8 Jun 2020 13:16:19 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Bump default wal_level to logical"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I think it's reasonable to push our default limits for slots,\n> walsenders, max_bgworkers etc a lot higher than current value (say 10 ->\n> 100). An unused slot wastes essentially no resources; an unused\n> walsender is just one PGPROC entry. If we did that, and also allowed\n> wal_level to be changed on the fly, we wouldn't need to restart in order\n> to enable logical replication, so there would be little or no pressure\n> to change the wal_level default.\n\nUnused PGPROC entries will still consume semaphores, which is problematic\non at least some OSes. It's not really clear to me why the default for\nwalsenders would need to be O(100) anyway. The existing default of 10\nalready ought to be enough to cover approximately 99.999% of use cases.\n\nIf we can allow wal_level to be changed on the fly, I agree that would\nhelp reduce the pressure to make the default setting more expensive.\nI don't recall why it's PGC_POSTMASTER right now, but I suppose there\nwas a reason for that ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Jun 2020 13:27:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bump default wal_level to logical"
},
{
"msg_contents": "On Mon, Jun 8, 2020 at 1:16 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> I think it's reasonable to push our default limits for slots,\n> walsenders, max_bgworkers etc a lot higher than current value (say 10 ->\n> 100). An unused slot wastes essentially no resources; an unused\n> walsender is just one PGPROC entry. If we did that, and also allowed\n> wal_level to be changed on the fly, we wouldn't need to restart in order\n> to enable logical replication, so there would be little or no pressure\n> to change the wal_level default.\n\nWouldn't having a whole bunch of extra PGPROC entries have negative\nimplications for the performance of GetSnapshotData() and other things\nthat don't scale well at high connection counts?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 8 Jun 2020 14:58:03 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bump default wal_level to logical"
},
{
"msg_contents": "On Mon, Jun 08, 2020 at 02:58:03PM -0400, Robert Haas wrote:\n> On Mon, Jun 8, 2020 at 1:16 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > I think it's reasonable to push our default limits for slots,\n> > walsenders, max_bgworkers etc a lot higher than current value (say 10 ->\n> > 100). An unused slot wastes essentially no resources; an unused\n> > walsender is just one PGPROC entry. If we did that, and also allowed\n> > wal_level to be changed on the fly, we wouldn't need to restart in order\n> > to enable logical replication, so there would be little or no pressure\n> > to change the wal_level default.\n> \n> Wouldn't having a whole bunch of extra PGPROC entries have negative\n> implications for the performance of GetSnapshotData() and other things\n> that don't scale well at high connection counts?\n> \n\n+1\n\nI think just having the defaults raised enough to allow even a couple DB\nreplication slots would be advantageous and allow it to be used to\naddress spur of the moment needs for systems that need to stay up. It\ndoes seem wasteful to by default support large numbers of slots and\nseems to be contrary to the project stance on initial limits.\n\nRegards,\nKen\n\n\n",
"msg_date": "Mon, 8 Jun 2020 14:05:26 -0500",
"msg_from": "Kenneth Marshall <ktm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: Bump default wal_level to logical"
},
{
"msg_contents": "On Mon, Jun 8, 2020 at 5:11 AM Magnus Hagander <magnus@hagander.net> wrote:\n> I agree that we should consider changing it *if* it does not come with a substantial overhead, but that has to be shown.\n\nI think the big overhead is that you log the old version of each row's\nprimary key (or whatever the replica identity is) when performing an\nUPDATE or DELETE. So if you test it with integer keys probably it's\nnot bad, and I suspect (though I haven't looked) that we don't do the\nextra logging when they key hasn't changed. But if you have wide text\ncolumns as keys and you update them a lot then things might not look\nso good. I think in the bad cases for this feature the overhead is\nvastly more than going from minimal to replica.\n\nAs many people here probably know, I am in general skeptical of this\nkind of change. It's based on the premise that reconfiguring the\nsystem is either too hard for users to figure out, or too disruptive\nbecause they'll need a restart. I tend to feel that the first problem\nshould be solved by making it easier to figure out, and the second one\nshould be solved by not requiring a restart. I don't think that's easy\nengineering, because while I think barriers help, they only address\none relatively small aspect of what's probably a pretty difficult\nengineering problem. But the real-life analogue of what's being\nproposed here seems to be \"the people who are buying this house might\nnot be able to figure out how to turn the lights on if they need more\nlight, so let's just turn on all the lights before we hand over the\nkeys, and that way if they just leave them on forever it'll be cool.\"\nTo which any reasonable person would say - \"if your electrical\nswitches are that hard to locate and use, that house has got a serious\ndesign problem.\"\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 8 Jun 2020 15:09:10 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bump default wal_level to logical"
},
{
"msg_contents": "On Mon, Jun 8, 2020 at 12:09 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I think the big overhead is that you log the old version of each row's\n> primary key (or whatever the replica identity is) when performing an\n> UPDATE or DELETE. So if you test it with integer keys probably it's\n> not bad, and I suspect (though I haven't looked) that we don't do the\n> extra logging when they key hasn't changed. But if you have wide text\n> columns as keys and you update them a lot then things might not look\n> so good. I think in the bad cases for this feature the overhead is\n> vastly more than going from minimal to replica.\n>\n> As many people here probably know, I am in general skeptical of this\n> kind of change. It's based on the premise that reconfiguring the\n> system is either too hard for users to figure out, or too disruptive\n> because they'll need a restart.\n\nI completely agree with your analysis, and your conclusions.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 8 Jun 2020 12:13:43 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Bump default wal_level to logical"
},
{
"msg_contents": "On 2020-Jun-08, Robert Haas wrote:\n\n> On Mon, Jun 8, 2020 at 1:16 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > I think it's reasonable to push our default limits for slots,\n> > walsenders, max_bgworkers etc a lot higher than current value (say 10 ->\n> > 100). An unused slot wastes essentially no resources; an unused\n> > walsender is just one PGPROC entry. If we did that, and also allowed\n> > wal_level to be changed on the fly, we wouldn't need to restart in order\n> > to enable logical replication, so there would be little or no pressure\n> > to change the wal_level default.\n> \n> Wouldn't having a whole bunch of extra PGPROC entries have negative\n> implications for the performance of GetSnapshotData() and other things\n> that don't scale well at high connection counts?\n\nOn a quantum-mechanics level, sure, but after Andres's snapshot\nscalability patches, will it be measurable? (Besides, if your workload\nis so high that you're measurably affected by the additional unused\nPGPROC entries, you can always tune it lower.)\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 8 Jun 2020 15:27:56 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Bump default wal_level to logical"
},
{
"msg_contents": "On Mon, Jun 8, 2020 at 12:28 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On a quantum-mechanics level, sure, but after Andres's snapshot\n> scalability patches, will it be measurable? (Besides, if your workload\n> is so high that you're measurably affected by the additional unused\n> PGPROC entries, you can always tune it lower.)\n\nThe point that Robert went on to make about the increased WAL volume\nfrom logging old primary key (or replica identity) values was a\nstronger argument, IMV.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 8 Jun 2020 13:20:53 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Bump default wal_level to logical"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-08 14:58:03 -0400, Robert Haas wrote:\n> On Mon, Jun 8, 2020 at 1:16 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > I think it's reasonable to push our default limits for slots,\n> > walsenders, max_bgworkers etc a lot higher than current value (say 10 ->\n> > 100). An unused slot wastes essentially no resources; an unused\n> > walsender is just one PGPROC entry. If we did that, and also allowed\n> > wal_level to be changed on the fly, we wouldn't need to restart in order\n> > to enable logical replication, so there would be little or no pressure\n> > to change the wal_level default.\n> \n> Wouldn't having a whole bunch of extra PGPROC entries have negative\n> implications for the performance of GetSnapshotData() and other things\n> that don't scale well at high connection counts?\n\nSome things yes, but I don't think it should have a significant effect\non GetSnapshotData():\n\nWe currently don't touch unused PGPROCs for it (by virtue of\nprocArray->pgprocnos), and we wouldn't with my proposed / pending\nchanges (because the relevant arrays contain data for connected backends\nat the \"front\").\nYou can have some effect if you have temporary spikes to very high\nconnection counts, and then reduce that again. That can lead to a lot of\nunused PGXACT entries being interleaved with used ones, leading to\nhigher cache miss ratios (data cache as well as tlb). But that cannot\nbecome a problem without those PGPROCs ever being used, because IIRC we\notherwise ensure they're used \"densely\".\n\nThere are a few places where we actually look over all PGPROCs, or size\nresources according to that, but I think most of those shouldn't be in\nhot paths.\n\nThere are also effects like the lock hashtables being sized larger,\nwhich then can reduce the cache hit ratio. 
I guess we could check\nwhether it'd make sense to charge less than max_locks_per_transaction\nfor everything but user processes, but I'm a bit doubtful it's worth it.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 8 Jun 2020 14:27:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Bump default wal_level to logical"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-08 13:27:50 -0400, Tom Lane wrote:\n> If we can allow wal_level to be changed on the fly, I agree that would\n> help reduce the pressure to make the default setting more expensive.\n> I don't recall why it's PGC_POSTMASTER right now, but I suppose there\n> was a reason for that ...\n\nThere's reasons, but IIRC they're all solvable with reasonable effort. I\nthink most of it boils down to only being able to rely on the new\nwal_level after a while. For minimal->recovery we basically need a\ncheckpoint started after the change in configuration, and for\nrecovery->logical we need to wait until all sessions have a) read the\nnew config setting b) finished the transaction that used the old\nsetting.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 8 Jun 2020 14:32:15 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Bump default wal_level to logical"
},
{
"msg_contents": "On 2020-06-08 23:32, Andres Freund wrote:\n> On 2020-06-08 13:27:50 -0400, Tom Lane wrote:\n>> If we can allow wal_level to be changed on the fly, I agree that would\n>> help reduce the pressure to make the default setting more expensive.\n>> I don't recall why it's PGC_POSTMASTER right now, but I suppose there\n>> was a reason for that ...\n> \n> There's reasons, but IIRC they're all solvable with reasonable effort. I\n> think most of it boils down to only being able to rely on the new\n> wal_level after a while. For minimal->recovery we basically need a\n> checkpoint started after the change in configuration, and for\n> recovery->logical we need to wait until all sessions have a) read the\n> new config setting b) finished the transaction that used the old\n> setting.\n\nThe best behavior from a user's perspective would be if the WAL level \nautomatically switched to logical if logical replication slots are \npresent. You might not even need 'logical' as an actual value of \nwal_level anymore, you just need to keep a flag in shared memory that \nrecords whether at least one logical slot exists.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 9 Jun 2020 08:52:24 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Bump default wal_level to logical"
},
{
"msg_contents": "At Tue, 9 Jun 2020 08:52:24 +0200, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote in \n> On 2020-06-08 23:32, Andres Freund wrote:\n> > On 2020-06-08 13:27:50 -0400, Tom Lane wrote:\n> >> If we can allow wal_level to be changed on the fly, I agree that would\n> >> help reduce the pressure to make the default setting more expensive.\n> >> I don't recall why it's PGC_POSTMASTER right now, but I suppose there\n> >> was a reason for that ...\n> > There's reasons, but IIRC they're all solvable with reasonable\n> > effort. I\n> > think most of it boils down to only being able to rely on the new\n> > wal_level after a while. For minimal->recovery we basically need a\n> > checkpoint started after the change in configuration, and for\n> > recovery->logical we need to wait until all sessions have a) read the\n> > new config setting b) finished the transaction that used the old\n> > setting.\n> \n> The best behavior from a user's perspective would be if the WAL level\n> automatically switched to logical if logical replication slots are\n> present. You might not even need 'logical' as an actual value of\n> wal_level anymore, you just need to keep a flag in shared memory that\n> records whether at least one logical slot exists.\n\nCurrently logical slots cannot be created while wal_level <\nlogical. Thus a database that has a logical slot must have been once\nexecuted with wal_level >= logical before the creation of the slot.\n\nIt seems to me setting wal_level = logical would be better than\ncreating a dummy logical slot while initdb..\n\nCoulnd't we add an option to speicfy wal_level for initdb? Or an\noption to append arbitrary config lines to postgresql.conf?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 09 Jun 2020 17:53:01 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bump default wal_level to logical"
},
{
"msg_contents": "On Tue, Jun 9, 2020 at 10:53 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Tue, 9 Jun 2020 08:52:24 +0200, Peter Eisentraut <\n> peter.eisentraut@2ndquadrant.com> wrote in\n> > On 2020-06-08 23:32, Andres Freund wrote:\n> > > On 2020-06-08 13:27:50 -0400, Tom Lane wrote:\n> > >> If we can allow wal_level to be changed on the fly, I agree that would\n> > >> help reduce the pressure to make the default setting more expensive.\n> > >> I don't recall why it's PGC_POSTMASTER right now, but I suppose there\n> > >> was a reason for that ...\n> > > There's reasons, but IIRC they're all solvable with reasonable\n> > > effort. I\n> > > think most of it boils down to only being able to rely on the new\n> > > wal_level after a while. For minimal->recovery we basically need a\n> > > checkpoint started after the change in configuration, and for\n> > > recovery->logical we need to wait until all sessions have a) read the\n> > > new config setting b) finished the transaction that used the old\n> > > setting.\n> >\n> > The best behavior from a user's perspective would be if the WAL level\n> > automatically switched to logical if logical replication slots are\n> > present. You might not even need 'logical' as an actual value of\n> > wal_level anymore, you just need to keep a flag in shared memory that\n> > records whether at least one logical slot exists.\n>\n> Currently logical slots cannot be created while wal_level <\n> logical. Thus a database that has a logical slot must have been once\n> executed with wal_level >= logical before the creation of the slot.\n>\n> It seems to me setting wal_level = logical would be better than\n> creating a dummy logical slot while initdb..\n>\n>\nI don't think Peter is suggesting a dummy slot. What he's suggesting is\nallow the creation of a logical slot even when wal_level is not set to\nlogical, and instead automatically raise the wal level to the equivalent of\nlogical when you do. 
That way, the operation becomes transparent to the\nuser.\n\n\n\n> Coulnd't we add an option to speicfy wal_level for initdb? Or an\n> option to append arbitrary config lines to postgresql.conf?\n>\n\nThat wouldn't solve the problem David raised initially. The whole reason\nfor being able to do this is that you *didn't*k now when you did initdb\nthat you were going to need logical replication at a later stage. So you\nare already in front of a running cluster with wal_level=replica, and now\nthe cost of turning it to logical includes restarting the db and kicking\nall sessions out. If you know when you're setting the system up that you're\ngoing to need it, then the work of setting it isn't that big.\n\nIt might be useful to have a functionality to append arbitrary config lines\nin initdb, but then that's not all that different from just copying in a\npre-made postgresql.auto.conf file into the newly initialized cluster after\nit's done -- I'm not sure it buys very much.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Tue, 9 Jun 2020 11:01:13 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Bump default wal_level to logical"
},
{
"msg_contents": "On Tue, Jun 9, 2020 at 2:31 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Tue, Jun 9, 2020 at 10:53 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>>\n>> At Tue, 9 Jun 2020 08:52:24 +0200, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote in\n>> > On 2020-06-08 23:32, Andres Freund wrote:\n>> > > On 2020-06-08 13:27:50 -0400, Tom Lane wrote:\n>> > >> If we can allow wal_level to be changed on the fly, I agree that would\n>> > >> help reduce the pressure to make the default setting more expensive.\n>> > >> I don't recall why it's PGC_POSTMASTER right now, but I suppose there\n>> > >> was a reason for that ...\n>> > > There's reasons, but IIRC they're all solvable with reasonable\n>> > > effort. I\n>> > > think most of it boils down to only being able to rely on the new\n>> > > wal_level after a while. For minimal->recovery we basically need a\n>> > > checkpoint started after the change in configuration, and for\n>> > > recovery->logical we need to wait until all sessions have a) read the\n>> > > new config setting b) finished the transaction that used the old\n>> > > setting.\n>> >\n>> > The best behavior from a user's perspective would be if the WAL level\n>> > automatically switched to logical if logical replication slots are\n>> > present. You might not even need 'logical' as an actual value of\n>> > wal_level anymore, you just need to keep a flag in shared memory that\n>> > records whether at least one logical slot exists.\n>>\n>> Currently logical slots cannot be created while wal_level <\n>> logical. Thus a database that has a logical slot must have been once\n>> executed with wal_level >= logical before the creation of the slot.\n>>\n\nI think the creation of slot would take a lot more time in that case\nas it needs to wait for existing transactions to finish which I feel\ncould be confusing to users. 
Sure, the cost would have to be incurred\nthe first time but still the user might tempt to cancel such an\noperation if he is not aware of the internals.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 9 Jun 2020 16:50:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bump default wal_level to logical"
},
{
"msg_contents": "On Tue, Jun 9, 2020 at 1:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Tue, Jun 9, 2020 at 2:31 PM Magnus Hagander <magnus@hagander.net>\n> wrote:\n> >\n> > On Tue, Jun 9, 2020 at 10:53 AM Kyotaro Horiguchi <\n> horikyota.ntt@gmail.com> wrote:\n> >>\n> >> At Tue, 9 Jun 2020 08:52:24 +0200, Peter Eisentraut <\n> peter.eisentraut@2ndquadrant.com> wrote in\n> >> > On 2020-06-08 23:32, Andres Freund wrote:\n> >> > > On 2020-06-08 13:27:50 -0400, Tom Lane wrote:\n> >> > >> If we can allow wal_level to be changed on the fly, I agree that\n> would\n> >> > >> help reduce the pressure to make the default setting more\n> expensive.\n> >> > >> I don't recall why it's PGC_POSTMASTER right now, but I suppose\n> there\n> >> > >> was a reason for that ...\n> >> > > There's reasons, but IIRC they're all solvable with reasonable\n> >> > > effort. I\n> >> > > think most of it boils down to only being able to rely on the new\n> >> > > wal_level after a while. For minimal->recovery we basically need a\n> >> > > checkpoint started after the change in configuration, and for\n> >> > > recovery->logical we need to wait until all sessions have a) read\n> the\n> >> > > new config setting b) finished the transaction that used the old\n> >> > > setting.\n> >> >\n> >> > The best behavior from a user's perspective would be if the WAL level\n> >> > automatically switched to logical if logical replication slots are\n> >> > present. You might not even need 'logical' as an actual value of\n> >> > wal_level anymore, you just need to keep a flag in shared memory that\n> >> > records whether at least one logical slot exists.\n> >>\n> >> Currently logical slots cannot be created while wal_level <\n> >> logical. 
Thus a database that has a logical slot must have been once\n> >> executed with wal_level >= logical before the creation of the slot.\n> >>\n>\n> I think the creation of slot would take a lot more time in that case\n> as it needs to wait for existing transactions to finish which I feel\n> could be confusing to users. Sure, the cost would have to be incurred\n> the first time but still the user might tempt to cancel such an\n> operation if he is not aware of the internals.\n>\n\nYeah, I am unsure if this is doable, but I think that's what Peter was\ntrying to explain, because that's what would be most user-friendly. But it\nmay definitely not be worth the complexity, I'm guessing.\n\nBeing able to change wal_level on reload instead of restart would be less\nuser friendly than that, but more user friendly than what we have now.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Tue, 9 Jun 2020 13:27:50 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Bump default wal_level to logical"
},
{
"msg_contents": "On Tue, Jun 9, 2020 at 4:58 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Tue, Jun 9, 2020 at 1:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Tue, Jun 9, 2020 at 2:31 PM Magnus Hagander <magnus@hagander.net> wrote:\n>> >\n>> > On Tue, Jun 9, 2020 at 10:53 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>> >>\n>> >> At Tue, 9 Jun 2020 08:52:24 +0200, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote in\n>> >> > On 2020-06-08 23:32, Andres Freund wrote:\n>> >> > > On 2020-06-08 13:27:50 -0400, Tom Lane wrote:\n>> >> > >> If we can allow wal_level to be changed on the fly, I agree that would\n>> >> > >> help reduce the pressure to make the default setting more expensive.\n>> >> > >> I don't recall why it's PGC_POSTMASTER right now, but I suppose there\n>> >> > >> was a reason for that ...\n>> >> > > There's reasons, but IIRC they're all solvable with reasonable\n>> >> > > effort. I\n>> >> > > think most of it boils down to only being able to rely on the new\n>> >> > > wal_level after a while. For minimal->recovery we basically need a\n>> >> > > checkpoint started after the change in configuration, and for\n>> >> > > recovery->logical we need to wait until all sessions have a) read the\n>> >> > > new config setting b) finished the transaction that used the old\n>> >> > > setting.\n>> >> >\n>> >> > The best behavior from a user's perspective would be if the WAL level\n>> >> > automatically switched to logical if logical replication slots are\n>> >> > present. You might not even need 'logical' as an actual value of\n>> >> > wal_level anymore, you just need to keep a flag in shared memory that\n>> >> > records whether at least one logical slot exists.\n>> >>\n>> >> Currently logical slots cannot be created while wal_level <\n>> >> logical. 
Thus a database that has a logical slot must have been once\n>> >> executed with wal_level >= logical before the creation of the slot.\n>> >>\n>>\n>> I think the creation of slot would take a lot more time in that case\n>> as it needs to wait for existing transactions to finish which I feel\n>> could be confusing to users. Sure, the cost would have to be incurred\n>> the first time but still the user might tempt to cancel such an\n>> operation if he is not aware of the internals.\n>\n>\n> Yeah, I am unsure if this is doable, but I think that's what Peter was trying to explain, because that's what would be most user-friendly. But it may definitely not be worth the complexity, I'm guessing.\n>\n\nAlso, I think we might need to think shall we allow wal_level to be\nchanged back to replica? If so, how?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 9 Jun 2020 18:35:39 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bump default wal_level to logical"
},
{
"msg_contents": "On Tue, Jun 9, 2020 at 3:02 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-06-08 13:27:50 -0400, Tom Lane wrote:\n> > If we can allow wal_level to be changed on the fly, I agree that would\n> > help reduce the pressure to make the default setting more expensive.\n> > I don't recall why it's PGC_POSTMASTER right now, but I suppose there\n> > was a reason for that ...\n>\n> There's reasons, but IIRC they're all solvable with reasonable effort. I\n> think most of it boils down to only being able to rely on the new\n> wal_level after a while. For minimal->recovery we basically need a\n> checkpoint started after the change in configuration, and for\n> recovery->logical we need to wait until all sessions have a) read the\n> new config setting b) finished the transaction that used the old\n> setting.\n>\n\nWhat if we note down the highest transaction id when we set wal_level\n= logical and won't allow a snapshot in logical decoding to reach a\nconsistent state till we see at least that xid as committed? I think\nthis will mean that it won't allow to decode any transaction which is\noperated with wal_level < logical and that might serve the purpose.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 9 Jun 2020 18:57:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bump default wal_level to logical"
},
{
"msg_contents": "On Tue, Jun 9, 2020 at 6:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jun 9, 2020 at 3:02 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2020-06-08 13:27:50 -0400, Tom Lane wrote:\n> > > If we can allow wal_level to be changed on the fly, I agree that would\n> > > help reduce the pressure to make the default setting more expensive.\n> > > I don't recall why it's PGC_POSTMASTER right now, but I suppose there\n> > > was a reason for that ...\n> >\n> > There's reasons, but IIRC they're all solvable with reasonable effort. I\n> > think most of it boils down to only being able to rely on the new\n> > wal_level after a while. For minimal->recovery we basically need a\n> > checkpoint started after the change in configuration, and for\n> > recovery->logical we need to wait until all sessions have a) read the\n> > new config setting b) finished the transaction that used the old\n> > setting.\n> >\n>\n> What if we note down the highest transaction id when we set wal_level\n> = logical and won't allow a snapshot in logical decoding to reach a\n> consistent state till we see at least that xid as committed? I think\n> this will mean that it won't allow to decode any transaction which is\n> operated with wal_level < logical and that might serve the purpose.\n>\n\nI intend to say that if the above is possible then we don't need to\nwait for (b).\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 9 Jun 2020 18:58:54 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bump default wal_level to logical"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-09 08:52:24 +0200, Peter Eisentraut wrote:\n> On 2020-06-08 23:32, Andres Freund wrote:\n> > On 2020-06-08 13:27:50 -0400, Tom Lane wrote:\n> > > If we can allow wal_level to be changed on the fly, I agree that would\n> > > help reduce the pressure to make the default setting more expensive.\n> > > I don't recall why it's PGC_POSTMASTER right now, but I suppose there\n> > > was a reason for that ...\n> > \n> > There's reasons, but IIRC they're all solvable with reasonable effort. I\n> > think most of it boils down to only being able to rely on the new\n> > wal_level after a while. For minimal->recovery we basically need a\n> > checkpoint started after the change in configuration, and for\n> > recovery->logical we need to wait until all sessions have a) read the\n> > new config setting b) finished the transaction that used the old\n> > setting.\n> \n> The best behavior from a user's perspective would be if the WAL level\n> automatically switched to logical if logical replication slots are present.\n> You might not even need 'logical' as an actual value of wal_level anymore,\n> you just need to keep a flag in shared memory that records whether at least\n> one logical slot exists.\n\nYea, it'd be good to have that. But you'd need the same type of\ncoordination mentioned above, no?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 9 Jun 2020 10:20:25 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Bump default wal_level to logical"
}
]
[
{
"msg_contents": "This blocks writes to all partitions until commit:\n\npostgres=# begin; CREATE INDEX ON pt(i);\nBEGIN\nCREATE INDEX\n\nCompare with CLUSTER rel1, rel2, ..., and REINDEX {SCHEMA|DATABASE|SYSTEM},\nwhich release their locks as soon as each rel is processed.\n\nI noticed while implementing REINDEX for partitioned tables, which, it seems\nclear, should also avoid slowly accumulating more and more write locks across\nan entire partition heirarchy.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 8 Jun 2020 04:35:53 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "should CREATE INDEX ON partitioned_table call\n PreventInTransactionBlock() ?"
},
{
"msg_contents": "On 2020-Jun-08, Justin Pryzby wrote:\n\n> This blocks writes to all partitions until commit:\n> \n> postgres=# begin; CREATE INDEX ON pt(i);\n> BEGIN\n> CREATE INDEX\n> \n> Compare with CLUSTER rel1, rel2, ..., and REINDEX {SCHEMA|DATABASE|SYSTEM},\n> which release their locks as soon as each rel is processed.\n\nWell, that would also require that transactions are committed and\nstarted for each partition. Merely adding PreventInTransactionBlock\nwould not do that. Moreover, since this would break DDL-in-transactions\nthat would otherwise work, it should be optional and thus need a keyword\nin the command. But CONCURRENTLY isn't it (because that means something\nelse) so we'd have to discuss what it would be.\n\n> I noticed while implementing REINDEX for partitioned tables, which, it seems\n> clear, should also avoid slowly accumulating more and more write locks across\n> an entire partition heirarchy.\n\nRight.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 8 Jun 2020 11:27:26 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: should CREATE INDEX ON partitioned_table call\n PreventInTransactionBlock() ?"
},
{
"msg_contents": "On Mon, Jun 08, 2020 at 11:27:26AM -0400, Alvaro Herrera wrote:\n> On 2020-Jun-08, Justin Pryzby wrote:\n> \n> > This blocks writes to all partitions until commit:\n> > \n> > postgres=# begin; CREATE INDEX ON pt(i);\n> > BEGIN\n> > CREATE INDEX\n> > \n> > Compare with CLUSTER rel1, rel2, ..., and REINDEX {SCHEMA|DATABASE|SYSTEM},\n> > which release their locks as soon as each rel is processed.\n\n(Correcting myself, I guess I mean \"CLUSTER;\" - it doesn't accept multiple\nrelation arguments.)\n\n> Well, that would also require that transactions are committed and\n> started for each partition. Merely adding PreventInTransactionBlock\n> would not do that. Moreover, since this would break DDL-in-transactions\n> that would otherwise work, it should be optional and thus need a keyword\n> in the command. But CONCURRENTLY isn't it (because that means something\n> else) so we'd have to discuss what it would be.\n\nI wasn't thinking of a new feature but rather if it would be desirable to\nchange behavior for v14 to always start/commit transaction for each partition.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 8 Jun 2020 10:40:45 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: should CREATE INDEX ON partitioned_table call\n PreventInTransactionBlock() ?"
},
{
"msg_contents": "On 2020-Jun-08, Justin Pryzby wrote:\n\n> On Mon, Jun 08, 2020 at 11:27:26AM -0400, Alvaro Herrera wrote:\n\n> > Well, that would also require that transactions are committed and\n> > started for each partition. Merely adding PreventInTransactionBlock\n> > would not do that. Moreover, since this would break DDL-in-transactions\n> > that would otherwise work, it should be optional and thus need a keyword\n> > in the command. But CONCURRENTLY isn't it (because that means something\n> > else) so we'd have to discuss what it would be.\n> \n> I wasn't thinking of a new feature but rather if it would be desirable to\n> change behavior for v14 to always start/commit transaction for each partition.\n\nWell, I was saying that I don't think a blanket behavior change is\ndesirable. For example, if you have a script that creates a partitioned\ntable and a few partitions and a few indexes, and it does all that in a\ntransaction, it'll break.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 8 Jun 2020 12:21:44 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: should CREATE INDEX ON partitioned_table call\n PreventInTransactionBlock() ?"
}
] |
[
{
"msg_contents": "The tests\n\nsrc/bin/pg_basebackup/t/010_pg_basebackup.pl\nsrc/bin/pg_rewind/t/004_pg_xlog_symlink.pl\n\nboth contain a TAP skip notice \"symlinks not supported on Windows\".\n\nThis is untrue. Symlinks certainly work on Windows, and we have other \nTAP tests using them, for example for tablespaces.\n\npg_rewind/t/004_pg_xlog_symlink.pl passes for me on Windows if I just \nremove the skip stuff. My attached patch does that.\n\nIf I remove the skip stuff in pg_basebackup/t/010_pg_basebackup.pl, it \nfails in various interesting ways. Apparently, some more work is needed \nto get the various paths handled correct on Windows. (At least a \nTestLib::perl2host call appears to be required.) I don't have the \nenthusiasm to fix this right now, but my patch at least updates the \ncomment from \"this isn't supported\" to \"this doesn't work correctly yet\".\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 8 Jun 2020 14:44:31 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "TAP tests and symlinks on Windows"
},
{
"msg_contents": "On Mon, Jun 08, 2020 at 02:44:31PM +0200, Peter Eisentraut wrote:\n> both contain a TAP skip notice \"symlinks not supported on Windows\".\n> \n> This is untrue. Symlinks certainly work on Windows, and we have other TAP\n> tests using them, for example for tablespaces.\n\n> pg_rewind/t/004_pg_xlog_symlink.pl passes for me on Windows if I just remove\n> the skip stuff. My attached patch does that.\n\nWhat's the version of your perl installation on Windows? With 5.22, I\nam still seeing that symlink() is not implemented, causing the tests\nof pg_rewind to blow in flight with your patch (MSVC 2015 here).\n--\nMichael",
"msg_date": "Tue, 9 Jun 2020 16:19:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "On 2020-06-09 09:19, Michael Paquier wrote:\n> On Mon, Jun 08, 2020 at 02:44:31PM +0200, Peter Eisentraut wrote:\n>> both contain a TAP skip notice \"symlinks not supported on Windows\".\n>>\n>> This is untrue. Symlinks certainly work on Windows, and we have other TAP\n>> tests using them, for example for tablespaces.\n> \n>> pg_rewind/t/004_pg_xlog_symlink.pl passes for me on Windows if I just remove\n>> the skip stuff. My attached patch does that.\n> \n> What's the version of your perl installation on Windows? With 5.22, I\n> am still seeing that symlink() is not implemented, causing the tests\n> of pg_rewind to blow in flight with your patch (MSVC 2015 here).\n\nI was using MSYS2 and the Perl version appears to have been 5.30.2.\nNot sure which one of these two factors makes the difference.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 9 Jun 2020 09:28:04 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "On Tue, Jun 9, 2020 at 9:28 AM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2020-06-09 09:19, Michael Paquier wrote:\n> > On Mon, Jun 08, 2020 at 02:44:31PM +0200, Peter Eisentraut wrote:\n> >> both contain a TAP skip notice \"symlinks not supported on Windows\".\n> >>\n> >> This is untrue. Symlinks certainly work on Windows, and we have other\n> TAP\n> >> tests using them, for example for tablespaces.\n> >\n> >> pg_rewind/t/004_pg_xlog_symlink.pl passes for me on Windows if I just\n> remove\n> >> the skip stuff. My attached patch does that.\n> >\n> > What's the version of your perl installation on Windows? With 5.22, I\n> > am still seeing that symlink() is not implemented, causing the tests\n> > of pg_rewind to blow in flight with your patch (MSVC 2015 here).\n>\n> I was using MSYS2 and the Perl version appears to have been 5.30.2.\n> Not sure which one of these two factors makes the difference.\n>\n\nThe difference seems to be MSYS2, it also fails for me if I do not\ninclude 'Win32::Symlink' with Perl 5.30.2.\n\nRegards,\n\nJuan José Santamaría Flecha\n",
"msg_date": "Tue, 9 Jun 2020 09:33:32 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "Juan José Santamaría Flecha <juanjo.santamaria@gmail.com> writes:\n\n> On Tue, Jun 9, 2020 at 9:28 AM Peter Eisentraut <\n> peter.eisentraut@2ndquadrant.com> wrote:\n>\n>> On 2020-06-09 09:19, Michael Paquier wrote:\n>> > On Mon, Jun 08, 2020 at 02:44:31PM +0200, Peter Eisentraut wrote:\n>> >> both contain a TAP skip notice \"symlinks not supported on Windows\".\n>> >>\n>> >> This is untrue. Symlinks certainly work on Windows, and we have other\n>> TAP\n>> >> tests using them, for example for tablespaces.\n>> >\n>> >> pg_rewind/t/004_pg_xlog_symlink.pl passes for me on Windows if I just\n>> remove\n>> >> the skip stuff. My attached patch does that.\n>> >\n>> > What's the version of your perl installation on Windows? With 5.22, I\n>> > am still seeing that symlink() is not implemented, causing the tests\n>> > of pg_rewind to blow in flight with your patch (MSVC 2015 here).\n>>\n>> I was using MSYS2 and the Perl version appears to have been 5.30.2.\n>> Not sure which one of these two factors makes the difference.\n>>\n>\n> The difference seems to be MSYS2, it also fails for me if I do not\n> include 'Win32::Symlink' with Perl 5.30.2.\n\nAmusingly, Win32::Symlink uses a copy of our pgsymlink(), which emulates\nsymlinks via junction points:\n\n https://metacpan.org/source/AUDREYT/Win32-Symlink-0.06/pgsymlink.c\n\nA portable way of using symlinks if possible would be:\n\n # In a BEGIN block because it overrides CORE::GLOBAL::symlink, which\n # only takes effect on code that's compiled after the override is\n # installed. We don't care if it fails, since it works without on\n # some Windows perls.\n BEGIN {\n eval { require Win32::Symlink; Win32::Symlink->import; }\n }\n\n # symlink() throws an exception if the perl doesn't support it\n if (not eval { symlink(\"\",\"\"); 1; })\n {\n plan skip_all => 'symlinks not supported';\n }\n else\n {\n plan tests => 5;\n }\n\nPlus a note in the Win32 docs that Win32::Symlink may be required to run\nsome tests on some Perl/Windows versions..\n\n- ilmari\n-- \n- Twitter seems more influential [than blogs] in the 'gets reported in\n the mainstream press' sense at least. - Matt McLeod\n- That'd be because the content of a tweet is easier to condense down\n to a mainstream media article. - Calle Dybedahl\n\n\n",
"msg_date": "Tue, 09 Jun 2020 11:26:19 +0100",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": false,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "On Tue, Jun 09, 2020 at 11:26:19AM +0100, Dagfinn Ilmari Mannsåker wrote:\n> Amusingly, Win32::Symlink uses a copy of our pgsymlink(), which emulates\n> symlinks via junction points:\n> \n> https://metacpan.org/source/AUDREYT/Win32-Symlink-0.06/pgsymlink.c\n\nOh, interesting point. Thanks for the reference!\n\n> A portable way of using symlinks if possible would be:\n> \n> # In a BEGIN block because it overrides CORE::GLOBAL::symlink, which\n> # only takes effect on code that's compiled after the override is\n> # installed. We don't care if it fails, since it works without on\n> # some Windows perls.\n> [...]\n> \n> Plus a note in the Win32 docs that Win32::Symlink may be required to run\n> some tests on some Perl/Windows versions..\n\nPlanting such a check in individual scripts is not a good idea because\nit would get forgotten. The best way to handle that is to add a new\ncheck in the BEGIN block of TestLib.pm. Note that we already do that\nwith createFile, OsFHandleOpen and CloseHandle. Now the question is:\ndo we really want to make this a hard requirement? I would like to\nanswer yes so as we make sure that this gets always tested, and this\nneeds proper documentation as you say. Now it would be also possible\nto check if the API is present in the BEGIN block of TestLib.pm, and\nthen use an independent variable similar to what we do with\n$use_unix_sockets to decide if tests should be skipped or not, but you\ncannot know if this gets actually, or ever, tested.\n--\nMichael",
"msg_date": "Fri, 12 Jun 2020 15:59:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "On Fri, Jun 12, 2020 at 9:00 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Jun 09, 2020 at 11:26:19AM +0100, Dagfinn Ilmari Mannsåker wrote:\n> > Plus a note in the Win32 docs that Win32::Symlink may be required to run\n> > some tests on some Perl/Windows versions..\n>\n> Planting such a check in individual scripts is not a good idea because\n> it would get forgotten. The best way to handle that is to add a new\n> check in the BEGIN block of TestLib.pm. Note that we already do that\n> with createFile, OsFHandleOpen and CloseHandle. Now the question is:\n> do we really want to make this a hard requirement? I would like to\n> answer yes so as we make sure that this gets always tested, and this\n> needs proper documentation as you say. Now it would be also possible\n> to check if the API is present in the BEGIN block of TestLib.pm, and\n> then use an independent variable similar to what we do with\n> $use_unix_sockets to decide if tests should be skipped or not, but you\n> cannot know if this gets actually, or ever, tested.\n\nThe first thing that comes to mind is adding an option to vcregress to\nchoose whether symlinks will be tested or skipped, would that be an\nacceptable solution?\n\nRegards,\n\nJuan José Santamaría Flecha\n",
"msg_date": "Fri, 12 Jun 2020 14:02:52 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "On Fri, Jun 12, 2020 at 02:02:52PM +0200, Juan José Santamaría Flecha wrote:\n> The first thing that comes to mind is adding an option to vcregress to\n> choose whether symlinks will be tested or skipped, would that be an\n> acceptable solution?\n\nMy take would be to actually enforce that as a requirement for 14~ if\nthat works reliably, and of course not backpatch that change as that's\nclearly an improvement and not a bug fix. It would be good to check\nthe status of each buildfarm member first though. And I would need to\nalso check my own stuff to begin with..\n--\nMichael",
"msg_date": "Sat, 13 Jun 2020 15:00:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "On Sat, Jun 13, 2020 at 03:00:54PM +0900, Michael Paquier wrote:\n> My take would be to actually enforce that as a requirement for 14~ if\n> that works reliably, and of course not backpatch that change as that's\n> clearly an improvement and not a bug fix. It would be good to check\n> the status of each buildfarm member first though. And I would need to\n> also check my own stuff to begin with..\n\nSo, I have been looking at that. And indeed as Peter said we are\nvisibly missing one call to perl2host in 010_pg_basebackup.pl.\n\nAnother thing I spotted is that Win32::Symlink does not allow to\ndetect properly if a path is a symlink using -l, causing one of the\ntests of pg_basebackup to fail when checking if a tablespace path has\nbeen updated. It would be good to get more people to test this patch\nwith different environments than mine. I am also adding Andrew\nDunstan in CC as the owner of the buildfarm animals running currently\nTAP tests for confirmation about the presence of Win32::Symlink\nthere as I am afraid it would cause failures: drongo, fairywen,\njacana and bowerbird.\n--\nMichael",
"msg_date": "Mon, 15 Jun 2020 15:23:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "On Mon, Jun 15, 2020 at 8:23 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n>\n> Another thing I spotted is that Win32::Symlink does not allow to\n> detect properly if a path is a symlink using -l, causing one of the\n> tests of pg_basebackup to fail when checking if a tablespace path has\n> been updated. It would be good to get more people to test this patch\n> with different environments than mine. I am also adding Andrew\n> Dunstan in CC as the owner of the buildfarm animals running currently\n> TAP tests for confirmation about the presence of Win32::Symlink\n> there as I am afraid it would cause failures: drongo, fairywen,\n> jacana and bowerbird.\n>\n\nThis patch works on my Windows 10 / Visual Studio 2019 / Perl 5.30.2\nmachine.\n\nRegards,\n\nJuan José Santamaría Flecha\n",
"msg_date": "Tue, 16 Jun 2020 10:10:23 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "\nOn 6/15/20 2:23 AM, Michael Paquier wrote:\n> On Sat, Jun 13, 2020 at 03:00:54PM +0900, Michael Paquier wrote:\n>> My take would be to actually enforce that as a requirement for 14~ if\n>> that works reliably, and of course not backpatch that change as that's\n>> clearly an improvement and not a bug fix. It would be good to check\n>> the status of each buildfarm member first though. And I would need to\n>> also check my own stuff to begin with..\n> So, I have been looking at that. And indeed as Peter said we are\n> visibly missing one call to perl2host in 010_pg_basebackup.pl.\n>\n> Another thing I spotted is that Win32::Symlink does not allow to\n> detect properly if a path is a symlink using -l, causing one of the\n> tests of pg_basebackup to fail when checking if a tablespace path has\n> been updted. It would be good to get more people to test this patch\n> with different environments than mine. I am also adding Andrew\n> Dunstan in CC as the owner of the buildfarm animals running currently\n> TAP tests for confirmation about the presence of Win32::Symlink\n> there as I am afraid it would cause failures: drongo, fairywen,\n> jacana and bowerbird.\n\n\n\nNot one of them has it.\n\n\nI think we'll need a dynamic test for its presence rather than just\nassuming it's there. (Use require in an eval for this).\n\n\nHowever, since all of them would currently fail we wouldn't actually\nhave any test coverage. I could see about installing it on one or two\nanimals (jacana would be a problem, it's using a very old and limited\nperl to run TAP tests.)\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 16 Jun 2020 07:53:26 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "On 2020-06-09 09:33, Juan José Santamaría Flecha wrote:\n> The difference seems to be MSYS2, it also fails for me if I do not \n> include 'Win32::Symlink' with Perl 5.30.2.\n\nMSYS2, which is basically Cygwin, emulates symlinks with junction \npoints, so this happens to work for our purpose. We could therefore \nenable these tests in that environment, if we could come up with a \nreliable way to detect it.\n\nAlso, if we are going to add Win32::Symlink to the mix, we should make \nsure things continue to work correctly under MSYS2.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 16 Jun 2020 14:24:23 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "\nOn 6/16/20 8:24 AM, Peter Eisentraut wrote:\n> On 2020-06-09 09:33, Juan José Santamaría Flecha wrote:\n>> The difference seems to be MSYS2, it also fails for me if I do not\n>> include 'Win32::Symlink' with Perl 5.30.2.\n>\n> MSYS2, which is basically Cygwin, emulates symlinks with junction\n> points, so this happens to work for our purpose. We could therefore\n> enable these tests in that environment, if we could come up with a\n> reliable way to detect it.\n\n\n From src/bin/pg_dump/t/010_dump_connstr.pl:\n\n\n if ($^O eq 'msys' && `uname -or` =~ /^[2-9].*Msys/)\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 16 Jun 2020 08:32:03 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "On Tue, Jun 16, 2020 at 08:32:03AM -0400, Andrew Dunstan wrote:\n> On 6/16/20 8:24 AM, Peter Eisentraut wrote:\n>> MSYS2, which is basically Cygwin, emulates symlinks with junction\n>> points, so this happens to work for our purpose. We could therefore\n>> enable these tests in that environment, if we could come up with a\n>> reliable way to detect it.\n\nHmm. In this case does perl's -l think that a junction point is\ncorrectly a soft link or not? We have a check based on that in\npg_basebackup's test and -l fails when it sees a junction point,\nforcing us to skip this test.\n\n> From src/bin/pg_dump/t/010_dump_connstr.pl:\n> \n> if ($^O eq 'msys' && `uname -or` =~ /^[2-9].*Msys/)\n\nSmart. This could become a central variable in TestLib.pm.\n--\nMichael",
"msg_date": "Wed, 17 Jun 2020 16:41:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "On Tue, Jun 16, 2020 at 07:53:26AM -0400, Andrew Dunstan wrote:\n> Not one of them has it.\n\nArgh.\n\n> I think we'll need a dynamic test for its presence rather than just\n> assuming it's there. (Use require in an eval for this).\n\nSure. No problem with implementing an automatic detection.\n\n> However, since all of them would currently fail we wouldn't actually\n> have any test coverage. I could see about installing it on one or two\n> animals (jacana would be a problem, it's using a very old and limited\n> perl to run TAP tests.)\n\nOkay. This could be a problem as jacana is proving to have good\ncoverage AFAIK. So it looks like we are really heading in the\ndirection of still skipping the test if there is no support for\nsymlink in the environment. At least that makes for fewer diffs in the\npatch.\n--\nMichael",
"msg_date": "Wed, 17 Jun 2020 16:44:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "On Wed, Jun 17, 2020 at 04:44:34PM +0900, Michael Paquier wrote:\n> Okay. This could be a problem as jacana is proving to have good\n> coverage AFAIK. So it looks like we are really heading in the\n> direction is still skipping the test if there is no support for\n> symlink in the environment. At least that makes less diffs in the\n> patch.\n\nI have implemented a patch based on the feedback received that does\nthe following, tested with all three patterns (MSVC only on Windows):\n- Assume that all non-Windows platform have a proper symlink\nimplementation for perl.\n- If on Windows, check for the presence of Win32::Symlink:\n-- If the module is not detected, skip the tests not supported.\n-- If the module is detected, run them.\n\nI have added this patch to the next commit fest:\nhttps://commitfest.postgresql.org/28/2612/\n\nThanks,\n--\nMichael",
"msg_date": "Tue, 23 Jun 2020 19:55:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "On 2020-06-23 12:55, Michael Paquier wrote:\n> I have implemented a patch based on the feedback received that does\n> the following, tested with all three patterns (MSVC only on Windows):\n> - Assume that all non-Windows platform have a proper symlink\n> implementation for perl.\n> - If on Windows, check for the presence of Win32::Symlink:\n> -- If the module is not detected, skip the tests not supported.\n> -- If the module is detected, run them.\n\nWe should be more accurate about things like this:\n\n+# The following tests test symlinks. Windows may not have symlinks, so\n+# skip there.\n\nThe issue isn't whether Windows has symlinks, since all versions of \nWindows supported by PostgreSQL do (AFAIK). The issue is only whether \nthe Perl installation that runs the tests has symlink support. And that \nis only necessary if the test itself wants to create or inspect \nsymlinks. For example, there are existing tests involving tablespaces \nthat work just fine on Windows.\n\nRelatedly, your patch ends up skipping the tests on MSYS2, even though \nPerl supports symlinks there out of the box.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 26 Jun 2020 14:00:37 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "On Fri, Jun 26, 2020 at 02:00:37PM +0200, Peter Eisentraut wrote:\n> We should be more accurate about things like this:\n> \n> +# The following tests test symlinks. Windows may not have symlinks, so\n> +# skip there.\n> \n> The issue isn't whether Windows has symlinks, since all versions of Windows\n> supported by PostgreSQL do (AFAIK). The issue is only whether the Perl\n> installation that runs the tests has symlink support. And that is only\n> necessary if the test itself wants to create or inspect symlinks. For\n> example, there are existing tests involving tablespaces that work just fine\n> on Windows.\n\nCheck. Indeed that sounds confusing.\n\n> Relatedly, your patch ends up skipping the tests on MSYS2, even though Perl\n> supports symlinks there out of the box.\n\nDo you think that it would be enough to use what Andrew has mentioned\nin [1]? I don't have a MSYS2 installation, so I am unfortunately not\nable to confirm that, but I would just move the check to TestLib.pm\nand save it in an extra variable.\n\n[1]: https://www.postgresql.org/message-id/6c5ffed0-20ee-8878-270f-ab56b7023802@2ndQuadrant.com\n--\nMichael",
"msg_date": "Mon, 29 Jun 2020 16:56:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "On Mon, Jun 29, 2020 at 04:56:16PM +0900, Michael Paquier wrote:\n> On Fri, Jun 26, 2020 at 02:00:37PM +0200, Peter Eisentraut wrote:\n>> We should be more accurate about things like this:\n>> \n>> +# The following tests test symlinks. Windows may not have symlinks, so\n>> +# skip there.\n>> \n>> The issue isn't whether Windows has symlinks, since all versions of Windows\n>> supported by PostgreSQL do (AFAIK). The issue is only whether the Perl\n>> installation that runs the tests has symlink support. And that is only\n>> necessary if the test itself wants to create or inspect symlinks. For\n>> example, there are existing tests involving tablespaces that work just fine\n>> on Windows.\n> \n> Check. Indeed that sounds confusing.\n\nAttached is an updated patch, where I have tried to use a better\nwording in all the code paths involved.\n\n>> Relatedly, your patch ends up skipping the tests on MSYS2, even though Perl\n>> supports symlinks there out of the box.\n> \n> Do you think that it would be enough to use what Andrew has mentioned\n> in [1]? I don't have a MSYS2 installation, so I am unfortunately not\n> able to confirm that, but I would just move the check to TestLib.pm\n> and save it in an extra variable.\n\nAdded an extra $is_msys2 to track that in TestLib.pm. One thing I am\nnot sure of though: Win32::Symlink fails to work properly with -l, but\nis that the case with MSYS2? If that's able to work, it would be\npossible to not skip the following test but I have taken the most\ncareful approach for now:\n+ # This symlink check is not supported on Windows. Win32::Symlink works\n+ # around this situation by using junction points (actually PostgreSQL\n+ # approach on the problem), and -l is not able to detect that situation.\n+ SKIP:\n+ {\n+ skip \"symlink check not implemented on Windows\", 1\n+ if ($windows_os)\n\nThanks,\n--\nMichael",
"msg_date": "Tue, 30 Jun 2020 21:13:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "On 2020-06-30 14:13, Michael Paquier wrote:\n> Attached is an updated patch, where I have tried to use a better\n> wording in all the code paths involved.\n\nThis new patch doesn't work for me on MSYS2 yet.\n\nIt fails right now in 010_pg_basebackup.pl at\n\n my $realTsDir = TestLib::perl2host(\"$shorter_tempdir/tblspc1\");\n\nwith chdir: No such file or directory. Because perl2host requires the \nparent directory of the argument to exist, but here it doesn't.\n\nIf I add\n\n mkdir $shorter_tempdir;\n\nabove it, then it proceeds past that point, but then the CREATE \nTABLESPACE command fails with No such file or directory. I think the call\n\n symlink \"$tempdir\", $shorter_tempdir;\n\ncreates a directory inside $shorter_tempdir, since it now exists, per my \nabove change, rather than in place of $shorter_tempdir.\n\nI think all of this is still a bit too fragile it needs further \nconsideration.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 3 Jul 2020 16:11:27 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "\nOn 7/3/20 10:11 AM, Peter Eisentraut wrote:\n> On 2020-06-30 14:13, Michael Paquier wrote:\n>> Attached is an updated patch, where I have tried to use a better\n>> wording in all the code paths involved.\n>\n> This new patch doesn't work for me on MSYS2 yet.\n>\n> It fails right now in 010_pg_basebackup.pl at\n>\n>     my $realTsDir      = TestLib::perl2host(\"$shorter_tempdir/tblspc1\");\n>\n> with chdir: No such file or directory.  Because perl2host requires the\n> parent directory of the argument to exist, but here it doesn't.\n>\n> If I add\n>\n>     mkdir $shorter_tempdir;\n>\n> above it, then it proceeds past that point, but then the CREATE\n> TABLESPACE command fails with No such file or directory.  I think the\n> call\n>\n>     symlink \"$tempdir\", $shorter_tempdir;\n>\n> creates a directory inside $shorter_tempdir, since it now exists, per\n> my above change, rather than in place of $shorter_tempdir.\n>\n> I think all of this is still a bit too fragile it needs further\n> consideration.\n\n\n\nI'll see what I can come up with.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Sun, 5 Jul 2020 08:18:46 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "On Sun, Jul 05, 2020 at 08:18:46AM -0400, Andrew Dunstan wrote:\n> On 7/3/20 10:11 AM, Peter Eisentraut wrote:\n>> I think all of this is still a bit too fragile it needs further\n>> consideration.\n\nIndeed. I would need a MSYS2 environment to dig into that. This\nlooks trickier than what I am used to on Windows.\n\n> I'll see what I can come up with.\n\nThanks, Andrew.\n--\nMichael",
"msg_date": "Mon, 6 Jul 2020 09:53:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "\nOn 7/3/20 10:11 AM, Peter Eisentraut wrote:\n> On 2020-06-30 14:13, Michael Paquier wrote:\n>> Attached is an updated patch, where I have tried to use a better\n>> wording in all the code paths involved.\n>\n> This new patch doesn't work for me on MSYS2 yet.\n>\n> It fails right now in 010_pg_basebackup.pl at\n>\n>     my $realTsDir      = TestLib::perl2host(\"$shorter_tempdir/tblspc1\");\n>\n> with chdir: No such file or directory.  Because perl2host requires the\n> parent directory of the argument to exist, but here it doesn't.\n\n\n\nYeah. I have a fix for that, which also checks to see if the grandparent\ndirectory exists:\n\n\n-              chdir $parent or die \"could not chdir \\\"$parent\\\": $!\";\n+              if (! chdir $parent)\n+              {\n+                      $leaf = '/' . basename ($parent) . $leaf;\n+                      $parent = dirname $parent;\n+                      chdir $parent or die \"could not chdir\n\\\"$parent\\\": $!\";\n+              }\n\n\nWe could generalize it to walk all the way up the path, but I think this\nis sufficient.\n\n\nIncidentally, perl2host is arguably a bad name for this routine - there\nis nothing perl-specific about the paths, they are provided by the msys\nenvironment. Maybe virtual2host or some such would be a better name, or\neven just host_path or native_path.\n\n\n>\n> If I add\n>\n>     mkdir $shorter_tempdir;\n>\n> above it, then it proceeds past that point, but then the CREATE\n> TABLESPACE command fails with No such file or directory.  I think the\n> call\n>\n>     symlink \"$tempdir\", $shorter_tempdir;\n>\n> creates a directory inside $shorter_tempdir, since it now exists, per\n> my above change, rather than in place of $shorter_tempdir.\n>\n> I think all of this is still a bit too fragile it needs further\n> consideration.\n\n\n\nThe symlink built into msys2 perl is distinctly fragile. 
I was only able\nto get it to work by doing this:\n\n\n+       mkdir \"$tempdir/tblspc1\";\n+       mkdir \"$tempdir/tbl=spc2\";\n+       mkdir \"$tempdir/$superlongname\";\n+       open (my $x, '>', \"$tempdir/tblspc1/stuff\") || die $!; print $x\n\"hi\\n\"; close($x);\n+       open ($x, '>', \"$tempdir/tbl=spc2/stuff\") || die $!; print $x\n\"hi\\n\"; close($x);\n+       open ($x, '>', \"$tempdir/$superlongname/stuff\") || die $!; print\n$x \"hi\\n\"; close($x);\n        symlink \"$tempdir\", $shorter_tempdir;\n+       unlink \"$tempdir/tblspc1/stuff\";\n+       unlink \"$tempdir/tbl=spc2/stuff\";\n+       unlink \"$tempdir/$superlongname/stuff\";\n\n\nwhich is sufficiently ugly that I don't think we should contemplate it.\n\n\nLuckily there is an alternative, which doesn't require the use of\nWin32::Symlink. Windows can be made to create junctions that function\nexactly as expected quite easily - it's a builtin of the cmd.exe\nprocessor, and it's been supported in at least every release since\nWindows Vista. 
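(For reference, since mklink is a builtin of the cmd.exe processor rather\nthan a standalone executable, it has to be invoked through the command\nprocessor. A session looks roughly like this - the paths are made-up\nexamples:\n\nC:\\>mklink /j c:\\short\\tmp c:\\some\\long\\tempdir\nJunction created for c:\\short\\tmp <<===>> c:\\some\\long\\tempdir\n\n)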
Here's a perl function that calls it:\n\n\nsub dsymlink\n{\n        my $oldname = shift;\n        my $newname = shift;\n        if ($windows_os)\n        {\n                $oldname = TestLib::perl2host($oldname);\n                $newname = TestLib::perl2host($newname);\n                $oldname =~ s,/,\\\\,g;\n                $newname =~ s,/,\\\\,g;\n                my $cmd = \"cmd //c 'mklink /j $newname $oldname'\";\n                system($cmd);\n        }\n        else\n        {\n                symlink $oldname, $newname;\n        }\n        die \"No $newname\" unless -e $newname;\n}\n\n\nIt might need a little more quoting to be more robust.\n\n\nGiven that, we can simply replace\n\n\n symlink \"$tempdir\", $shorter_tempdir;\n\n\nwith\n\n\n dsymlink \"$tempdir\", $shorter_tempdir;\n\n\nThen, with a little more sprinkling of perl2host the pg_basebackup tests\ncan be made to work on msys2.\n\n\nI'm going to prepare patches along these lines.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 8 Jul 2020 09:54:35 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "On Wed, Jul 8, 2020 at 3:54 PM Andrew Dunstan <\nandrew.dunstan@2ndquadrant.com> wrote:\n\n>\n> Incidentally, perl2host is arguably a bad name for this routine - there\n> is nothing perl-specific about the paths, they are provided by the msys\n> environment. Maybe virtual2host or some such would be a better name, or\n> even just host_path or native_path.\n>\n\nThere is a utility cygpath [1] meant for the conversion between Unix and\nWindows path formats, that might be a meaningful name also.\n\n[1] http://cygwin.net/cygwin-ug-net/cygpath.html\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Wed, 8 Jul 2020 17:07:08 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "\nOn 7/8/20 11:07 AM, Juan José Santamaría Flecha wrote:\n>\n> On Wed, Jul 8, 2020 at 3:54 PM Andrew Dunstan\n> <andrew.dunstan@2ndquadrant.com\n> <mailto:andrew.dunstan@2ndquadrant.com>> wrote:\n>\n>\n> Incidentally, perl2host is arguably a bad name for this routine -\n> there\n> is nothing perl-specific about the paths, they are provided by the\n> msys\n> environment. Maybe virtual2host or some such would be a better\n> name, or\n> even just host_path or native_path.\n>\n>\n> There is a utility cygpath [1] meant for the conversion between Unix\n> and Windows path formats, that might be a meaningful name also.\n>\n> [1] http://cygwin.net/cygwin-ug-net/cygpath.html\n>\n>\n\n\nOh, good find. But unfortunately it's not present in my msys1 installations.\n\n\nSo we'll still need to use the 'pwd -W` trick for those.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 8 Jul 2020 12:16:25 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "On 2020-Jul-08, Andrew Dunstan wrote:\n\n> On 7/8/20 11:07 AM, Juan José Santamaría Flecha wrote:\n\n> > There is a utility cygpath [1] meant for the conversion between Unix\n> > and Windows path formats, that might be a meaningful name also.\n> >\n> > [1] http://cygwin.net/cygwin-ug-net/cygpath.html\n> \n> Oh, good find. But unfortunately it's not present in my msys1 installations.\n> \n> So we'll still need to use the 'pwd -W` trick for those.\n\nI think his point is not to use that utility, just to use its name as\nthe name of the perl routine.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 8 Jul 2020 12:22:04 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "\nOn 7/8/20 12:22 PM, Alvaro Herrera wrote:\n> On 2020-Jul-08, Andrew Dunstan wrote:\n>\n>> On 7/8/20 11:07 AM, Juan José Santamaría Flecha wrote:\n>>> There is a utility cygpath [1] meant for the conversion between Unix\n>>> and Windows path formats, that might be a meaningful name also.\n>>>\n>>> [1] http://cygwin.net/cygwin-ug-net/cygpath.html\n>> Oh, good find. But unfortunately it's not present in my msys1 installations.\n>>\n>> So we'll still need to use the 'pwd -W` trick for those.\n> I think his point is not to use that utility, just to use its name as\n> the name of the perl routine.\n>\n\n\nThat would be wholly misleading, since it's not needed at all when we're\nrunning under cygwin.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 8 Jul 2020 13:18:00 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "On Wed, Jul 8, 2020 at 7:18 PM Andrew Dunstan <\nandrew.dunstan@2ndquadrant.com> wrote:\n\n>\n> On 7/8/20 12:22 PM, Alvaro Herrera wrote:\n> > On 2020-Jul-08, Andrew Dunstan wrote:\n> >\n> >> On 7/8/20 11:07 AM, Juan José Santamaría Flecha wrote:\n> >>> There is a utility cygpath [1] meant for the conversion between Unix\n> >>> and Windows path formats, that might be a meaningful name also.\n> >>>\n> >>> [1] http://cygwin.net/cygwin-ug-net/cygpath.html\n> >> Oh, good find. But unfortunately it's not present in my msys1\n> installations.\n> >>\n> >> So we'll still need to use the 'pwd -W` trick for those.\n> > I think his point is not to use that utility, just to use its name as\n> > the name of the perl routine.\n>\n> That would be wholly misleading, since it's not needed at all when we're\n> running under cygwin.\n>\n\nMSYS does not include cygpath(), but MSYS2 does. I see why the name could\nbe confusing outside cygwin, but that is a given and it would point to a\nutility that it mimics. Maybe a note for future reference could be enough.\n\nmsys_to_windows_path() seems too long, but is hard to misunderstand.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Thu, 9 Jul 2020 06:25:10 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "On 7/8/20 9:54 AM, Andrew Dunstan wrote:\n>\n>\n>\n> Then, with a little more sprinkling of perl2host the pg_basebackup tests\n> can be made to work on msys2.\n>\n>\n> I'm going to prepare patches along these lines.\n>\n>\n\n\n\n\nAfter much frustration and gnashing of teeth here's a patch that allows\nalmost all the TAP tests involving symlinks to work as expected on all\nWindows build environments, without requiring an additional Perl module.\nI have tested this on a system that is very similar to that running\ndrongo and fairywren, with both msys2 and MSVC builds.\n\n\nI didn't change the name of perl2host - Sufficient unto the day is the\nevil thereof. But I did modify it a) to allow use of cygpath if\navailable and b) to allow it to succeed if the grandparent directory\nexists when cygpath isn't available.\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 10 Jul 2020 07:58:02 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "On Fri, Jul 10, 2020 at 07:58:02AM -0400, Andrew Dunstan wrote:\n> After much frustration and gnashing of teeth here's a patch that allows\n> almost all the TAP tests involving symlinks to work as expected on all\n> Windows build environments, without requiring an additional Perl module.\n> I have tested this on a system that is very similar to that running\n> drongo and fairywren, with both msys2 and MSVC builds.\n\nThanks Andrew for looking at the part with MSYS. The tests pass for\nme with MSVC. The trick with mklink is cool. I have not considered\nthat, and the test code gets simpler.\n\n+ my $cmd = qq{mklink /j \"$newname\" \"$oldname\"};\n+ if ($Config{osname} eq 'msys')\n+ {\n+ # need some indirection on msys\n+ $cmd = qq{echo '$cmd' | \\$COMSPEC /Q};\n+ }\n+ note(\"dir_symlink cmd: $cmd\");\n+ system($cmd);\nFrom the quoting perspective, wouldn't it be simpler to build an array\nwith all those arguments and call system() with @cmd?\n\n+# Create files that look like temporary relations to ensure they are ignored\n+# in a tablespace.\n+my @tempRelationFiles = qw(t888_888 t888888_888888_vm.1);\nThis variable conflicts with a previous declaration, creating a\nwarning.\n\n+ skip \"symlink check not implemented on Windows\", 1\n+ if ($windows_os);\n opendir(my $dh, \"$pgdata/pg_tblspc\") or die;\nI think that this would be cleaner with a SKIP block.\n\n+Portably create a symlink for a director. On Windows this creates a junction.\n+Elsewhere it just calls perl's builtin symlink.\ns/director/directory/\ns/junction/junction point/\n\n <para>\n The TAP tests require the Perl module <literal>IPC::Run</literal>.\n This module is available from CPAN or an operating system package.\n+ On Windows, <literal>Win32API::File</literal> is also required .\n </para>\nThis part should be backpatched IMO.\n\nSome of the indentation is weird, this needs a cleanup with perltidy.\n--\nMichael",
"msg_date": "Tue, 14 Jul 2020 14:31:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "On 2020-07-10 13:58, Andrew Dunstan wrote:\n> After much frustration and gnashing of teeth here's a patch that allows\n> almost all the TAP tests involving symlinks to work as expected on all\n> Windows build environments, without requiring an additional Perl module.\n> I have tested this on a system that is very similar to that running\n> drongo and fairywren, with both msys2 and MSVC builds.\n\nThanks. This patch works for me in my environment. The code changes \nlook very clean, so it seems like a good improvement.\n\nAttached is a small fixup patch for some typos and a stray debug message.\n\nA perltidy run might be worthwhile, as Michael already mentioned.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 14 Jul 2020 09:14:17 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "On 7/14/20 1:31 AM, Michael Paquier wrote:\n> On Fri, Jul 10, 2020 at 07:58:02AM -0400, Andrew Dunstan wrote:\n>> After much frustration and gnashing of teeth here's a patch that allows\n>> almost all the TAP tests involving symlinks to work as expected on all\n>> Windows build environments, without requiring an additional Perl module.\n>> I have tested this on a system that is very similar to that running\n>> drongo and fairywren, with both msys2 and MSVC builds.\n> Thanks Andrew for looking at the part with MSYS. The tests pass for\n> me with MSVC. The trick with mklink is cool. I have not considered\n> that, and the test code gets simpler.\n>\n> + my $cmd = qq{mklink /j \"$newname\" \"$oldname\"};\n> + if ($Config{osname} eq 'msys')\n> + {\n> + # need some indirection on msys\n> + $cmd = qq{echo '$cmd' | \\$COMSPEC /Q};\n> + }\n> + note(\"dir_symlink cmd: $cmd\");\n> + system($cmd);\n> From the quoting perspective, wouldn't it be simpler to build an array\n> with all those arguments and call system() with @cmd?\n\n\n\nThis is the simplest invocation I found to be reliable on msys2 (and it\ntook me a long time to find). If you have a tested alternative please\nlet me know.\n\n\n> +# Create files that look like temporary relations to ensure they are ignored\n> +# in a tablespace.\n> +my @tempRelationFiles = qw(t888_888 t888888_888888_vm.1);\n> This variable conflicts with a previous declaration, creating a\n> warning.\n>\n> + skip \"symlink check not implemented on Windows\", 1\n> + if ($windows_os);\n> opendir(my $dh, \"$pgdata/pg_tblspc\") or die;\n> I think that this would be cleaner with a SKIP block.\n\n\n\nI don't understand this comment. The skip statement here is in a SKIP\nblock. In fact skip only works inside SKIP blocks. (perldoc Test::More\nfor details). Maybe you got confused by the diff format.\n\n\n>\n> +Portably create a symlink for a director. 
On Windows this creates a junction.\n> +Elsewhere it just calls perl's builtin symlink.\n> s/director/directory/\n> s/junction/junction point/\n\n\n\nfixed.\n\n\n>\n> <para>\n> The TAP tests require the Perl module <literal>IPC::Run</literal>.\n> This module is available from CPAN or an operating system package.\n> + On Windows, <literal>Win32API::File</literal> is also required .\n> </para>\n> This part should be backpatched IMO.\n\n\n\nI will do this in a separate backpatched commit.\n\n\n\n>\n> Some of the indentation is weird, this needs a cleanup with perltidy.\n\n\nDone.\n\n\nRevised patch attached.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 15 Jul 2020 11:04:28 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "On Wed, Jul 15, 2020 at 11:04:28AM -0400, Andrew Dunstan wrote:\n> This is the simplest invocation I found to be reliable on msys2 (and it\n> took me a long time to find). If you have a tested alternative please\n> let me know.\n\nHaving a working MSYS environment is still on my TODO list :)\n\n> I don't understand this comment. The skip statement here is in a SKIP\n> block. In fact skip only works inside SKIP blocks. (perldoc Test::More\n> for details). Maybe you got confused by the diff format.\n\nIndeed, I got trapped by the diff here. Thanks.\n\nThe patch looks good to me.\n--\nMichael",
"msg_date": "Thu, 16 Jul 2020 19:21:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests and symlinks on Windows"
},
{
"msg_contents": "On Thu, Jul 16, 2020 at 07:21:27PM +0900, Michael Paquier wrote:\n> The patch looks good to me.\n\nFor the sake of the archives, this has been applied as d66b23b0 and\nthe buildfarm is green. I have also changed the related CF entry to\nreflect what has been done, with Andrew as author, etc:\nhttps://commitfest.postgresql.org/28/2612/\n--\nMichael",
"msg_date": "Fri, 17 Jul 2020 09:15:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests and symlinks on Windows"
}
] |
[
{
"msg_contents": "This adds support for using non-default huge page sizes for shared\nmemory. This is achieved via the new \"huge_page_size\" config entry.\nThe config value defaults to 0, meaning it will use the system default.\n---\n\nThis would be very helpful when running in kubernetes since nodes may\nsupport multiple huge page sizes, and have pre-allocated huge page memory\nfor each size. This lets the user select huge page size without having\nto change the default huge page size on the node. This will also be\nuseful when doing benchmarking with different huge page sizes, since it\nwouldn't require a full system reboot.\n\nSince the default value of the new config is 0 (resulting in using the\ndefault huge page size) this should be backwards compatible with old\nconfigs.\n\nFeel free to comment on the phrasing (both in docs and code) and on the\noverall change.\n\n doc/src/sgml/config.sgml | 25 ++++++\n doc/src/sgml/runtime.sgml | 41 +++++----\n src/backend/port/sysv_shmem.c | 88 ++++++++++++-------\n src/backend/utils/misc/guc.c | 11 +++\n src/backend/utils/misc/postgresql.conf.sample | 2 +\n src/include/storage/pg_shmem.h | 1 +\n 6 files changed, 120 insertions(+), 48 deletions(-)\n\ndiff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\nindex aca8f73a50..6177b819ce 100644\n--- a/doc/src/sgml/config.sgml\n+++ b/doc/src/sgml/config.sgml\n@@ -1582,6 +1582,31 @@ include_dir 'conf.d'\n </listitem>\n </varlistentry>\n \n+ <varlistentry id=\"guc-huge-page-size\" xreflabel=\"huge_page_size\">\n+ <term><varname>huge_page_size</varname> (<type>integer</type>)\n+ <indexterm>\n+ <primary><varname>huge_page_size</varname> configuration parameter</primary>\n+ </indexterm>\n+ </term>\n+ <listitem>\n+ <para>\n+ Controls what size of huge pages is used in conjunction with\n+ <xref linkend=\"guc-huge-pages\"/>.\n+ The default is zero (<literal>0</literal>).\n+ When set to <literal>0</literal>, the default huge page size on the system will\n+ be used.\n+ </para>\n+ 
<para>\n+ Most modern linux systems support <literal>2MB</literal> and <literal>1GB</literal>\n+ huge pages, and some architectures supports other sizes as well. For more information\n+ on how to check for support and usage, see <xref linkend=\"linux-huge-pages\"/>.\n+ </para>\n+ <para>\n+ Controlling huge page size is not supported on Windows. \n+ </para>\n+ </listitem>\n+ </varlistentry>\n+\n <varlistentry id=\"guc-temp-buffers\" xreflabel=\"temp_buffers\">\n <term><varname>temp_buffers</varname> (<type>integer</type>)\n <indexterm>\ndiff --git a/doc/src/sgml/runtime.sgml b/doc/src/sgml/runtime.sgml\nindex 88210c4a5d..cbdbcb4fdf 100644\n--- a/doc/src/sgml/runtime.sgml\n+++ b/doc/src/sgml/runtime.sgml\n@@ -1391,41 +1391,50 @@ export PG_OOM_ADJUST_VALUE=0\n using large values of <xref linkend=\"guc-shared-buffers\"/>. To use this\n feature in <productname>PostgreSQL</productname> you need a kernel\n with <varname>CONFIG_HUGETLBFS=y</varname> and\n- <varname>CONFIG_HUGETLB_PAGE=y</varname>. You will also have to adjust\n- the kernel setting <varname>vm.nr_hugepages</varname>. To estimate the\n- number of huge pages needed, start <productname>PostgreSQL</productname>\n- without huge pages enabled and check the\n- postmaster's anonymous shared memory segment size, as well as the system's\n- huge page size, using the <filename>/proc</filename> file system. This might\n- look like:\n+ <varname>CONFIG_HUGETLB_PAGE=y</varname>. You will also have to pre-allocate\n+ huge pages with the desired huge page size. To estimate the number of\n+ huge pages needed, start <productname>PostgreSQL</productname> without huge\n+ pages enabled and check the postmaster's anonymous shared memory segment size,\n+ as well as the system's supported huge page sizes, using the\n+ <filename>/sys</filename> file system. 
This might look like:\n <programlisting>\n $ <userinput>head -1 $PGDATA/postmaster.pid</userinput>\n 4170\n $ <userinput>pmap 4170 | awk '/rw-s/ && /zero/ {print $2}'</userinput>\n 6490428K\n+$ <userinput>ls /sys/kernel/mm/hugepages</userinput>\n+hugepages-1048576kB hugepages-2048kB\n+</programlisting>\n+\n+ You can now choose between the supported sizes, 2MiB and 1GiB in this case.\n+ By default <productname>PostgreSQL</productname> will use the default huge\n+ page size on the system, but that can be configured via\n+ <xref linkend=\"guc-huge-page-size\"/>.\n+ The default huge page size can be found with:\n+<programlisting>\n $ <userinput>grep ^Hugepagesize /proc/meminfo</userinput>\n Hugepagesize: 2048 kB\n </programlisting>\n+\n+ For <literal>2MiB</literal>,\n <literal>6490428</literal> / <literal>2048</literal> gives approximately\n <literal>3169.154</literal>, so in this example we need at\n least <literal>3170</literal> huge pages, which we can set with:\n <programlisting>\n-$ <userinput>sysctl -w vm.nr_hugepages=3170</userinput>\n+$ <userinput>echo 3170 | tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages</userinput>\n </programlisting>\n A larger setting would be appropriate if other programs on the machine\n- also need huge pages. Don't forget to add this setting\n- to <filename>/etc/sysctl.conf</filename> so that it will be reapplied\n- after reboots.\n+ also need huge pages. It is also possible to pre allocate huge pages on boot\n+ by adding the kernel parameters <literal>hugepagesz=2M hugepages=3170</literal>.\n </para>\n \n <para>\n Sometimes the kernel is not able to allocate the desired number of huge\n- pages immediately, so it might be necessary to repeat the command or to\n- reboot. (Immediately after a reboot, most of the machine's memory\n- should be available to convert into huge pages.) 
To verify the huge\n- page allocation situation, use:\n+ pages immediately due to external fragmentation, so it might be necessary to\n+ repeat the command or to reboot. To verify the huge page allocation situation\n+ for a given size, use:\n <programlisting>\n-$ <userinput>grep Huge /proc/meminfo</userinput>\n+$ <userinput>cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages</userinput>\n </programlisting>\n </para>\n \ndiff --git a/src/backend/port/sysv_shmem.c b/src/backend/port/sysv_shmem.c\nindex 198a6985bf..56419417dc 100644\n--- a/src/backend/port/sysv_shmem.c\n+++ b/src/backend/port/sysv_shmem.c\n@@ -32,6 +32,7 @@\n #endif\n \n #include \"miscadmin.h\"\n+#include \"port/pg_bitutils.h\"\n #include \"portability/mem.h\"\n #include \"storage/dsm.h\"\n #include \"storage/fd.h\"\n@@ -466,53 +467,76 @@ PGSharedMemoryAttach(IpcMemoryId shmId,\n *\n * Returns the (real or assumed) page size into *hugepagesize,\n * and the hugepage-related mmap flags to use into *mmap_flags.\n- *\n- * Currently *mmap_flags is always just MAP_HUGETLB. Someday, on systems\n- * that support it, we might OR in additional bits to specify a particular\n- * non-default huge page size.\n */\n+\n+\n static void\n GetHugePageSize(Size *hugepagesize, int *mmap_flags)\n {\n-\t/*\n-\t * If we fail to find out the system's default huge page size, assume it\n-\t * is 2MB. This will work fine when the actual size is less. 
If it's\n-\t * more, we might get mmap() or munmap() failures due to unaligned\n-\t * requests; but at this writing, there are no reports of any non-Linux\n-\t * systems being picky about that.\n-\t */\n-\t*hugepagesize = 2 * 1024 * 1024;\n-\t*mmap_flags = MAP_HUGETLB;\n+\tif (huge_page_size != 0)\n+\t{\n+\t\t/* If huge page size is provided in config we use that size */\n+\t\t*hugepagesize = (Size) huge_page_size * 1024;\n+\t}\n+\telse\n+\t{\n+\t\t/*\n+\t\t * If we fail to find out the system's default huge page size, or no\n+\t\t * huge page size is provided in config, assume it is 2MB. This will\n+\t\t * work fine when the actual size is less. If it's more, we might get\n+\t\t * mmap() or munmap() failures due to unaligned requests; but at this\n+\t\t * writing, there are no reports of any non-Linux systems being picky\n+\t\t * about that.\n+\t\t */\n+\t\t*hugepagesize = 2 * 1024 * 1024;\n \n-\t/*\n-\t * System-dependent code to find out the default huge page size.\n-\t *\n-\t * On Linux, read /proc/meminfo looking for a line like \"Hugepagesize:\n-\t * nnnn kB\". Ignore any failures, falling back to the preset default.\n-\t */\n+\t\t/*\n+\t\t * System-dependent code to find out the default huge page size.\n+\t\t *\n+\t\t * On Linux, read /proc/meminfo looking for a line like \"Hugepagesize:\n+\t\t * nnnn kB\". 
Ignore any failures, falling back to the preset default.\n+\t\t */\n #ifdef __linux__\n-\t{\n-\t\tFILE\t *fp = AllocateFile(\"/proc/meminfo\", \"r\");\n-\t\tchar\t\tbuf[128];\n-\t\tunsigned int sz;\n-\t\tchar\t\tch;\n \n-\t\tif (fp)\n \t\t{\n-\t\t\twhile (fgets(buf, sizeof(buf), fp))\n+\t\t\tFILE\t *fp = AllocateFile(\"/proc/meminfo\", \"r\");\n+\t\t\tchar\t\tbuf[128];\n+\t\t\tunsigned int sz;\n+\t\t\tchar\t\tch;\n+\n+\t\t\tif (fp)\n \t\t\t{\n-\t\t\t\tif (sscanf(buf, \"Hugepagesize: %u %c\", &sz, &ch) == 2)\n+\t\t\t\twhile (fgets(buf, sizeof(buf), fp))\n \t\t\t\t{\n-\t\t\t\t\tif (ch == 'k')\n+\t\t\t\t\tif (sscanf(buf, \"Hugepagesize: %u %c\", &sz, &ch) == 2)\n \t\t\t\t\t{\n-\t\t\t\t\t\t*hugepagesize = sz * (Size) 1024;\n-\t\t\t\t\t\tbreak;\n+\t\t\t\t\t\tif (ch == 'k')\n+\t\t\t\t\t\t{\n+\t\t\t\t\t\t\t*hugepagesize = sz * (Size) 1024;\n+\t\t\t\t\t\t\tbreak;\n+\t\t\t\t\t\t}\n+\t\t\t\t\t\t/* We could accept other units besides kB, if needed */\n \t\t\t\t\t}\n-\t\t\t\t\t/* We could accept other units besides kB, if needed */\n \t\t\t\t}\n+\t\t\t\tFreeFile(fp);\n \t\t\t}\n-\t\t\tFreeFile(fp);\n \t\t}\n+#endif\t\t\t\t\t\t\t/* __linux__ */\n+\t}\n+\n+\t*mmap_flags = MAP_HUGETLB;\n+\n+\t/*\n+\t * System-dependent code to configure mmap_flags.\n+\t *\n+\t * On Linux, configure flags to include page size, since default huge page\n+\t * size will be used in case no size is provided.\n+\t */\n+#ifdef __linux__\n+\t{\n+\t\tint\t\t\tshift = pg_ceil_log2_64(*hugepagesize);\n+\n+\t\t*mmap_flags |= (shift & MAP_HUGE_MASK) << MAP_HUGE_SHIFT;\n \t}\n #endif\t\t\t\t\t\t\t/* __linux__ */\n }\ndiff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c\nindex 2f3e0a70e0..b482c660cf 100644\n--- a/src/backend/utils/misc/guc.c\n+++ b/src/backend/utils/misc/guc.c\n@@ -585,6 +585,7 @@ int\t\t\tssl_renegotiation_limit;\n * need to be duplicated in all the different implementations of pg_shmem.c.\n */\n int\t\t\thuge_pages;\n+int\t\t\thuge_page_size;\n \n /*\n * These variables are 
all dummies that don't do anything, except in some\n@@ -2269,6 +2270,16 @@ static struct config_int ConfigureNamesInt[] =\n \t\t1024, 16, INT_MAX / 2,\n \t\tNULL, NULL, NULL\n \t},\n+\t{\n+\t\t{\"huge_page_size\", PGC_POSTMASTER, RESOURCES_MEM,\n+\t\t\tgettext_noop(\"The size of huge page that should be used.\"),\n+\t\t\tNULL,\n+\t\t\tGUC_UNIT_KB\n+\t\t},\n+\t\t&huge_page_size,\n+\t\t0, 0, INT_MAX,\n+\t\tNULL, NULL, NULL\n+\t},\n \n \t{\n \t\t{\"temp_buffers\", PGC_USERSET, RESOURCES_MEM,\ndiff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample\nindex ac02bd0c00..750d3f6245 100644\n--- a/src/backend/utils/misc/postgresql.conf.sample\n+++ b/src/backend/utils/misc/postgresql.conf.sample\n@@ -122,6 +122,8 @@\n \t\t\t\t\t# (change requires restart)\n #huge_pages = try\t\t\t# on, off, or try\n \t\t\t\t\t# (change requires restart)\n+#huge_page_size = 0\t\t\t# use default huge page size when set to zero\n+\t\t\t\t\t# (change requires restart)\n #temp_buffers = 8MB\t\t\t# min 800kB\n #max_prepared_transactions = 0\t\t# zero disables the feature\n \t\t\t\t\t# (change requires restart)\ndiff --git a/src/include/storage/pg_shmem.h b/src/include/storage/pg_shmem.h\nindex 0de26b3427..9992932a00 100644\n--- a/src/include/storage/pg_shmem.h\n+++ b/src/include/storage/pg_shmem.h\n@@ -44,6 +44,7 @@ typedef struct PGShmemHeader\t/* standard header for all Postgres shmem */\n /* GUC variables */\n extern int\tshared_memory_type;\n extern int\thuge_pages;\n+extern int\thuge_page_size;\n \n /* Possible values for huge_pages */\n typedef enum\n-- \n2.27.0\n\n\n\n",
"msg_date": "Mon, 8 Jun 2020 17:46:39 +0200",
"msg_from": "Odin Ugedal <odin@ugedal.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Add support for choosing huge page size"
},
{
"msg_contents": "On Tue, Jun 9, 2020 at 4:13 AM Odin Ugedal <odin@ugedal.com> wrote:\n> This adds support for using non-default huge page sizes for shared\n> memory. This is achived via the new \"huge_page_size\" config entry.\n> The config value defaults to 0, meaning it will use the system default.\n> ---\n>\n> This would be very helpful when running in kubernetes since nodes may\n> support multiple huge page sizes, and have pre-allocated huge page meory\n> for each size. This lets the user select huge page size without having\n> to change the default huge page size on the node. This will also be\n> useful when doing benchmarking with different huge page sizes, since it\n> wouldn't require a full system reboot.\n\n+1\n\n> Since the default value of the new config is 0 (resulting in using the\n> default huge page size) this should be backwards compatible with old\n> configs.\n\n+1\n\n> Feel free to comment on the phrasing (both in docs and code) and on the\n> overall change.\n\nThis change seems good to me, because it will make testing easier and\ncertain mixed page size configurations possible. I haven't tried your\npatch yet; I'll take it for a spin when I'm benchmarking some other\nrelevant stuff soon.\n\nMinor comments on wording:\n\n> + <para>\n> + Most modern linux systems support <literal>2MB</literal> and <literal>1GB</literal>\n> + huge pages, and some architectures supports other sizes as well. For more information\n> + on how to check for support and usage, see <xref linkend=\"linux-huge-pages\"/>.\n\nLinux with a capital L. Hmm, I don't especially like saying \"Most\nmodern Linux systems\" as code for Intel. I wonder if we should\ninstead say something like: \"Some commonly available page sizes on\nmodern 64 bit server architectures include: <literal>2MB<literal> and\n<literal>1GB</literal> (Intel and AMD), <literal>16MB</literal> and\n<literal>16GB</literal> (IBM POWER), and ... 
(ARM).\"\n\n> + </para>\n> + <para>\n> + Controling huge page size is not supported on Windows.\n\nControlling\n\nJust by the way, some googling is telling me that very recent versions\nof Windows *could* support this (search keywords:\n\"NtAllocateVirtualMemoryEx 1GB\"), so that could be a project for\nsomeone who understands Windows to look into later.\n\n\n",
"msg_date": "Tue, 9 Jun 2020 10:26:26 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add support for choosing huge page size"
},
{
"msg_contents": "Hi,\n\nThank you so much for the feedback David and Thomas!\n\nAttached v2 of patch, updated with the comments from Thomas (again,\nthanks). I also changed the mmap flags to only set size if the\nselected huge page size is not the default on (on linux). The support\nfor this functionality was added in Linux 3.8, and therefore it was\nnot supported before then. Should we add that to the docs, or what do\nyou think? The definitions of MAP_HUGE_MASK and MAP_HUGE_SHIFT were\nadded in Linux 3.8 too, but since they are a part of libc/musl, and\nare \"used\" at compile time, that shouldn't be a problem, or?\n\nIf a huge page size that is not supported on the system is chosen via\nhuge_page_size (and huge_pages = on), it will result in \"FATAL: could\nnot map anonymous shared memory: Invalid argument\". This is the same\nthat happens today when huge pages aren't supported at all, so I guess\nit is ok for now (and then we can consider verifying that it is\nsupported at a later stage).\n\nAlso, thanks for the information about the Windows. Have been\nsearching about info on huge pages in windows and \"superpages\" in bsd,\nwithout that much luck. I only have experience on linux, so I think we\ncan do as you said, to let someone else look at it. :)\n\nOdin",
"msg_date": "Tue, 9 Jun 2020 16:24:24 +0200",
"msg_from": "Odin Ugedal <odin@ugedal.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add support for choosing huge page size"
},
{
"msg_contents": "On Wed, Jun 10, 2020 at 2:24 AM Odin Ugedal <odin@ugedal.com> wrote:\n> Attached v2 of patch, updated with the comments from Thomas (again,\n> thanks). I also changed the mmap flags to only set size if the\n> selected huge page size is not the default on (on linux). The support\n> for this functionality was added in Linux 3.8, and therefore it was\n> not supported before then. Should we add that to the docs, or what do\n> you think? The definitions of MAP_HUGE_MASK and MAP_HUGE_SHIFT were\n> added in Linux 3.8 too, but since they are a part of libc/musl, and\n> are \"used\" at compile time, that shouldn't be a problem, or?\n\nOh, so maybe we need a configure test for them? And if you don't have\nit, a runtime error if you try to set the page size to something other\nthan 0 (like we do for effective_io_concurrency if you don't have a\nposix_fadvise() function).\n\n> If a huge page size that is not supported on the system is chosen via\n> huge_page_size (and huge_pages = on), it will result in \"FATAL: could\n> not map anonymous shared memory: Invalid argument\". This is the same\n> that happens today when huge pages aren't supported at all, so I guess\n> it is ok for now (and then we can consider verifying that it is\n> supported at a later stage).\n\nIf you set it to an unsupported size, that seems reasonable to me. If\nyou set it to an unsupported size and have huge_pages=try, do we fall\nback to using no huge pages?\n\n> Also, thanks for the information about the Windows. Have been\n> searching about info on huge pages in windows and \"superpages\" in bsd,\n> without that much luck. I only have experience on linux, so I think we\n> can do as you said, to let someone else look at it. :)\n\nFor what it's worth, here's what I know about this on other operating systems:\n\n1. AIX can do huge pages, but only if you use System V shared memory\n(not for mmap() anonymous shared). 
In\nhttps://commitfest.postgresql.org/25/1960/ we got as far as adding\nsupport for shared_memory_type=sysv, but to go further we'll need\nsomeone willing to hack on the patch on an AIX system, preferably with\nroot access so they can grant the postgres user wired memory\nprivileges (or whatever they call that over there). But at a glance,\nthey don't have a way to ask for a specific page size, just \"large\".\n\n2. FreeBSD doesn't currently have a way to ask for super pages\nexplicitly at all; it does something like Linux Transparent Huge\nPages, except that it's transparent. It does seem to do a pretty good\njob of putting PostgreSQL text/code, shared memory and heap memory\ninto super pages automatically on my systems. One small detail is\nthat there is a flag MAP_ALIGNED_SUPER that might help get better\nalignment; it'd be bad if the lower pages of our shared memory\nhappened to be the location of lock arrays, proc array, buffer mapping\nor other largish and very hot stuff and also happened to be on 4kb\npages due to misalignment stuff, but I wonder if the flag is really\nneeded to avoid that on current FreeBSD or not. I should probably go\nand check some time! (I have no clue for other BSDs.)\n\n3. Last time I checked, Solaris and illumos seemed to have the same\nphilosophy as FreeBSD and not give you explicit control; my info could\nbe out of date, and I have no clue beyond that.\n\n4. What I said above about Windows; the explicit page size thing\nseems to be bleeding edge and barely documented.\n\n5. macOS does have flags to ask for super pages with various sizes,\nbut apparently such mappings are not inherited by child processes. So\nthat's useless for us.\n\nAs for the relevance of all this to your patch, I think we just need a\ncheck callback for the GUC, that says \"ERROR: huge_page_size must be\nset to 0 on this platform\".\n\n\n",
"msg_date": "Wed, 10 Jun 2020 17:11:40 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add support for choosing huge page size"
},
{
"msg_contents": "On Wed, Jun 10, 2020 at 5:11 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> 3. Last time I checked, Solaris and illumos seemed to have the same\n> philosophy as FreeBSD and not give you explicit control; my info could\n> be out of date, and I have no clue beyond that.\n\nAh, I was wrong about that one: memcntl(2) looks highly relevant, but\nI'm not planning to look into that myself.\n\n\n",
"msg_date": "Wed, 10 Jun 2020 17:42:16 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add support for choosing huge page size"
},
{
"msg_contents": "Thanks again Thomas,\n\n> Oh, so maybe we need a configure test for them? And if you don't have\n> it, a runtime error if you try to set the page size to something other\n> than 0 (like we do for effective_io_concurrency if you don't have a\n> posix_fadvise() function).\n\nAhh, yes, that sounds reasonable. Did some fiddling with the configure\nscript to add a check, and think I got it right (but not 100% sure\ntho.). Added new v3 patch.\n\n> If you set it to an unsupported size, that seems reasonable to me. If\n> you set it to an unsupported size and have huge_pages=try, do we fall\n> back to using no huge pages?\n\nYes, the \"fallback\" with huge_pages=try is the same for both\nhuge_page_size=0 and huge_page_size=nMB, and is the same as without\nthis patch.\n\n> For what it's worth, here's what I know about this on other operating systems:\n\nThanks for all the background info!\n\n> 1. AIX can do huge pages, but only if you use System V shared memory\n> (not for mmap() anonymous shared). In\n> https://commitfest.postgresql.org/25/1960/ we got as far as adding\n> support for shared_memory_type=sysv, but to go further we'll need\n> someone willing to hack on the patch on an AIX system, preferably with\n> root access so they can grant the postgres user wired memory\n> privileges (or whatever they call that over there). But at a glance,\n> they don't have a way to ask for a specific page size, just \"large\".\n\nInteresting. I might get access to some AIX systems at university this fall,\nso maybe I will get some time to dive into the patch.\n\n\nOdin",
"msg_date": "Wed, 10 Jun 2020 13:45:02 +0200",
"msg_from": "Odin Ugedal <odin@ugedal.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add support for choosing huge page size"
},
{
"msg_contents": "Hi Odin,\n\nDocumentation syntax error \"<literal>2MB<literal>\" shows up as:\n\nconfig.sgml:1605: parser error : Opening and ending tag mismatch:\nliteral line 1602 and para\n </para>\n ^\n\nPlease install the documentation tools\nhttps://www.postgresql.org/docs/devel/docguide-toolsets.html, rerun\nconfigure and \"make docs\" to see these kinds of errors.\n\nThe build is currently failing on Windows:\n\nundefined symbol: HAVE_DECL_MAP_HUGE_MASK at src/include/pg_config.h\nline 143 at src/tools/msvc/Mkvcbuild.pm line 851.\n\nI think that's telling us that you need to add this stuff into\nsrc/tools/msvc/Solution.pm, so that we can say it doesn't have it. I\ndon't have Windows but whenever you post a new version we'll see if\nWindows likes it here:\n\nhttp://cfbot.cputube.org/odin-ugedal.html\n\nWhen using huge_pages=on, huge_page_size=1GB, but default\nshared_buffers, I noticed that the error message reports the wrong\n(unrounded) size in this message:\n\n2020-06-18 02:06:30.407 UTC [73552] HINT: This error usually means\nthat PostgreSQL's request for a shared memory segment exceeded\navailable memory, swap space, or huge pages. To reduce the request\nsize (currently 149069824 bytes), reduce PostgreSQL's shared memory\nusage, perhaps by reducing shared_buffers or max_connections.\n\nThe request size was actually:\n\nmmap(NULL, 1073741824, PROT_READ|PROT_WRITE,\nMAP_SHARED|MAP_ANONYMOUS|MAP_HUGETLB|30<<MAP_HUGE_SHIFT, -1, 0) = -1\nENOMEM (Cannot allocate memory)\n\n1GB pages are so big that it becomes a little tricky to set shared\nbuffers large enough without wasting RAM. What I mean is, if I want\nto use shared_buffers=16GB, I need to have at least 17 huge pages\navailable, but the 17th page is nearly entirely wasted! Imagine that\non POWER 16GB pages. That makes me wonder if we should actually\nredefine these GUCs differently so that you state the total, or at\nleast use the rounded memory for buffers... 
I think we could consider\nthat to be a separate problem with a separate patch though.\n\nJust for fun, I compared 4KB, 2MB and 1GB pages for a hash join of a\n3.5GB table against itself. Hash joins are the perfect way to\nexercise the TLB because they're very likely to miss. I also applied\nmy patch[1] to allow parallel queries to use shared memory from the\nmain shared memory area, so that they benefit from the configured page\nsize, using pages that are allocated once at start up. (Without that,\nyou'd have to mess around with /dev/shm mount options, and then hope\nthat pages were available at query time, and it'd also be slower for\nother stupid implementation reasons).\n\n# echo never > /sys/kernel/mm/transparent_hugepage/enabled\n# echo 8500 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages\n# echo 17 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages\n\nshared_buffers=8GB\ndynamic_shared_memory_main_size=8GB\n\ncreate table t as select generate_series(1, 100000000)::int i;\nalter table t set (parallel_workers = 7);\ncreate extension pg_prewarm;\nselect pg_prewarm('t');\nset max_parallel_workers_per_gather=7;\nset work_mem='1GB';\n\nselect count(*) from t t1 join t t2 using (i);\n\n4KB pages: 12.42 seconds\n2MB pages: 9.12 seconds\n1GB pages: 9.07 seconds\n\nUnfortunately I can't access the TLB miss counters on this system due\nto virtualisation restrictions, and the systems where I can don't have\n1GB pages. 
According to cpuid(1) this system has a fairly typical\nsetup:\n\n cache and TLB information (2):\n 0x63: data TLB: 2M/4M pages, 4-way, 32 entries\n data TLB: 1G pages, 4-way, 4 entries\n 0x03: data TLB: 4K pages, 4-way, 64 entries\n\nThis operation is touching about 8GB of data (scanning 3.5GB of table,\nbuilding a 4.5GB hash table) so 4 x 1GB is not enough to do this without\nTLB misses.\n\nLet's try that again, except this time with shared_buffers=4GB,\ndynamic_shared_memory_main_size=4GB, and only half as many tuples in\nt, so it ought to fit:\n\n4KB pages: 6.37 seconds\n2MB pages: 4.96 seconds\n1GB pages: 5.07 seconds\n\nWell that's disappointing. I wondered if this was something to do\nwith NUMA effects on this two node box, so I tried running that again\nwith postgres under numactl --cpunodebind 0 --membind 0 and I got:\n\n4KB pages: 5.43 seconds\n2MB pages: 4.05 seconds\n1GB pages: 4.00 seconds\n\nFrom this I can't really conclude that it's terribly useful to use\nlarger page sizes, but it's certainly useful to have the ability to do\nfurther testing using the proposed GUC.\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGLAE2QBv-WgGp%2BD9P_J-%3Dyne3zof9nfMaqq1h3EGHFXYQ%40mail.gmail.com\n\n\n",
"msg_date": "Thu, 18 Jun 2020 16:00:49 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add support for choosing huge page size"
},
{
"msg_contents": "> Documentation syntax error \"<literal>2MB<literal>\" shows up as:\n\nOps, sorry, should be fixed now.\n\n> The build is currently failing on Windows:\n\nAhh, thanks. Looks like the Windows stuff isn't autogenerated, so\nmaybe this new patch works..\n\n> When using huge_pages=on, huge_page_size=1GB, but default\nshared_buffers, I noticed that the error message reports the wrong\n(unrounded) size in this message:\n\nAhh, yes, that is correct. Switched to printing the _real_ allocsize now!\n\n\n> 1GB pages are so big that it becomes a little tricky to set shared\nbuffers large enough without wasting RAM. What I mean is, if I want\nto use shared_buffers=16GB, I need to have at least 17 huge pages\navailable, but the 17th page is nearly entirely wasted! Imagine that\non POWER 16GB pages. That makes me wonder if we should actually\nredefine these GUCs differently so that you state the total, or at\nleast use the rounded memory for buffers... I think we could consider\nthat to be a separate problem with a separate patch though.\n\nYes, that is a good point! But as you say, I guess that fits better in\nanother patch.\n\n> Just for fun, I compared 4KB, 2MB and 1GB pages for a hash join of a\n3.5GB table against itself. [...]\n\nThanks for the results! Will look into your patch when I get time, but\nit certainly looks cool! I have a 4-node numa machine with ~100GiB of\nmemory and a single node numa machine, so i'll take some benchmarks\nwhen I get time!\n\n> I wondered if this was something to do\n> with NUMA effects on this two node box, so I tried running that again\n> with postgres under numactl --cpunodebind 0 --membind 0 and I got: [...]\n\nYes, making this \"properly\" numa aware to avoid/limit cross-numa\nmemory access is kinda tricky. 
When reserving huge pages they are\ndistributed more or less evenly between the nodes, and they can be\nfound by using `grep -R \"\"\n/sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages`\n(can also be written to), so there _may_ be a chance that the huge\npages you got were on another node than 0 (due to the fact that there\nwere not enough), but that is just guessing.",
"msg_date": "Sun, 21 Jun 2020 21:51:11 +0200",
"msg_from": "Odin Ugedal <odin@ugedal.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add support for choosing huge page size"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-18 16:00:49 +1200, Thomas Munro wrote:\n> Unfortunately I can't access the TLB miss counters on this system due\n> to virtualisation restrictions, and the systems where I can don't have\n> 1GB pages. According to cpuid(1) this system has a fairly typical\n> setup:\n> \n> cache and TLB information (2):\n> 0x63: data TLB: 2M/4M pages, 4-way, 32 entries\n> data TLB: 1G pages, 4-way, 4 entries\n> 0x03: data TLB: 4K pages, 4-way, 64 entries\n\nHm. Doesn't that system have a second level of TLB (STLB) with more 1GB\nentries? I think there's some errata around what intel exposes via cpuid\naround this :(\n\nGuessing that this is a skylake server chip?\nhttps://en.wikichip.org/wiki/intel/microarchitectures/skylake_(server)#Memory_Hierarchy\n\n> [...] Additionally there is a unified L2 TLB (STLB)\n> [...] STLB\n> [...] 1 GiB page translations:\n> [...] 16 entries; 4-way set associative\n\n\n> This operation is touching about 8GB of data (scanning 3.5GB of table,\n> building a 4.5GB hash table) so 4 x 1GB is not enough do this without\n> TLB misses.\n\nI assume this uses 7 workers?\n\n\n> Let's try that again, except this time with shared_buffers=4GB,\n> dynamic_shared_memory_main_size=4GB, and only half as many tuples in\n> t, so it ought to fit:\n> \n> 4KB pages: 6.37 seconds\n> 2MB pages: 4.96 seconds\n> 1GB pages: 5.07 seconds\n> \n> Well that's disappointing.\n\nHm, I don't actually know the answer to this: If this actually uses\nmultiple workers, won't the fact that each has an independent page table\n(despite having overlapping contents) lead to there being fewer actually\navailable 1GB entries available? 
Obviously depends on how processes are\nscheduled (iirc hyperthreading shares dTLBs).\n\nMight be worth looking at whether there are cpu migrations or testing\nwith a single worker.\n\n\n> I wondered if this was something to do\n> with NUMA effects on this two node box, so I tried running that again\n> with postgres under numactl --cpunodebind 0 --membind 0 and I got:\n\n\n> 4KB pages: 5.43 seconds\n> 2MB pages: 4.05 seconds\n> 1GB pages: 4.00 seconds\n> \n> From this I can't really conclude that it's terribly useful to use\n> larger page sizes, but it's certainly useful to have the ability to do\n> further testing using the proposed GUC.\n\nDue to the low number of 1GB entries they're quite likely to be\nproblematic imo. Especially when there's more concurrent misses than\nthere are page table entries.\n\nI'm somewhat doubtful that it's useful to use 1GB entries for all of our\nshared memory when that's bigger than the maximum covered size. I\nsuspect that it'd better to use 1GB entries for some and smaller entries\nfor the rest of the memory.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 21 Jun 2020 13:55:17 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add support for choosing huge page size"
},
{
"msg_contents": "On Mon, Jun 22, 2020 at 7:51 AM Odin Ugedal <odin@ugedal.com> wrote:\n> Ahh, thanks. Looks like the Windows stuff isn't autogenerated, so\n> maybe this new patch works..\n\nOn second thoughts, it seemed like overkill to use configure just to\ndetect whether macros are defined, so I dropped that and used plain\nold #if defined(). I also did some minor proof-reading and editing on\nthe documentation and comments; I put back the bit about sysctl and\nsysctl.conf because I think that is still pretty useful to highlight\nfor people who just want to use the default size, along with the /sys\nmethod.\n\nPushed. Thanks for the patch! It's always nice to see notes like\nthis being removed:\n\n- * Currently *mmap_flags is always just MAP_HUGETLB. Someday, on systems\n- * that support it, we might OR in additional bits to specify a particular\n- * non-default huge page size.\n\nIn passing, I think GetHugePageSize() is a bit odd; it claims to have\na Linux-specific part and a generic part, and yet the whole thing is\nwrapped in #ifdef MAP_HUGETLB which is Linux-specific as far as I\nknow. But that's not this patch's fault.\n\nWe might want to consider removing the note about CONFIG_HUGETLB_PAGE\nfrom the manual; I'm not sure if kernels built without that stuff are\nstill roaming in the wild, or if it's another anachronysm due for\nremoval like commit c8be915a. I didn't do that today, though.\n\n\n",
"msg_date": "Fri, 17 Jul 2020 14:42:18 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add support for choosing huge page size"
}
] |
[
{
"msg_contents": "Hi,\n\nWe currently have\n *\tbool SpinLockFree(slock_t *lock)\n *\t\tTests if the lock is free. Returns true if free, false if locked.\n *\t\tThis does *not* change the state of the lock.\nand its underlying S_LOCK_FREE() operation:\n *\n *\tbool S_LOCK_FREE(slock_t *lock)\n *\t\tTests if the lock is free. Returns true if free, false if locked.\n *\t\tThis does *not* change the state of the lock.\n\nThey are currently unused and, as far as I can tell, have never been\nused outside test code /asserts. We also don't currently implement them\nin the spinlock fallback code:\n\nbool\ns_lock_free_sema(volatile slock_t *lock)\n{\n\t/* We don't currently use S_LOCK_FREE anyway */\n\telog(ERROR, \"spin.c does not support S_LOCK_FREE()\");\n\treturn false;\n}\n\n\nI also find the \"free\" in the name very confusing. Everytime I look at\nthem (which, I grant, is not that often), I have to think about what\nthey mean.\n\nThus: Let's just remove SpinLockFree() / S_LOCK_FREE()?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 8 Jun 2020 15:53:38 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Remove SpinLockFree() / S_LOCK_FREE()?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> We currently have\n> *\tbool SpinLockFree(slock_t *lock)\n> *\t\tTests if the lock is free. Returns true if free, false if locked.\n> *\t\tThis does *not* change the state of the lock.\n> [ which isn't used ]\n> Thus: Let's just remove SpinLockFree() / S_LOCK_FREE()?\n\nYeah. I think they were included in the original design on the\ntheory that we'd need 'em someday. But if we haven't found a use\nyet we probably never will. So +1 for narrowing the API a tad.\n\n(We'd lose some error checking ability in the S_LOCK_TEST code,\nbut probably that's not worth worrying about.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Jun 2020 19:00:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove SpinLockFree() / S_LOCK_FREE()?"
}
] |
[
{
"msg_contents": "Hello,\n\nTwo recent failures show plan changes in RLS queries on master. Based\non nearby comments, the choice plan is being used to verify access (or\nlack of access) to row estimates, so I guess that means something\ncould be amiss here. (Or it could be due to the dropped UDP flaky\nstats problem, but then why in the same place twice, and why twice in\na week, only on master, and not for months before that?)\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2020-06-08%2002%3A58%3A03\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2020-06-06%2003%3A18%3A03\n\n\n",
"msg_date": "Tue, 9 Jun 2020 14:26:29 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Intermittent test plan change in \"privileges\" test on BF animal prion"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Two recent failures show plan changes in RLS queries on master.\n\nYeah. I assume this is related to 0c882e52a, but I'm not sure how.\nThe fact that we've only seen it on prion (which runs\n-DRELCACHE_FORCE_RELEASE -DCATCACHE_FORCE_RELEASE) is suggestive,\nbut it's not clear why those options would lead to unstable\nplanner estimates. I've been waiting to see if we start to get\nsimilar reports from the full-fledged CLOBBER_CACHE_ALWAYS critters.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Jun 2020 22:54:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent test plan change in \"privileges\" test on BF animal\n prion"
},
{
"msg_contents": "On Tue, 9 Jun 2020 at 14:27, Thomas Munro <thomas.munro@gmail.com> wrote:\n> Two recent failures show plan changes in RLS queries on master. Based\n> on nearby comments, the choice plan is being used to verify access (or\n> lack of access) to row estimates, so I guess that means something\n> could be amiss here. (Or it could be due to the dropped UDP flaky\n> stats problem, but then why in the same place twice, and why twice in\n> a week, only on master, and not for months before that?)\n\nI see 0c882e52a did change the number of statistics targets on that\ntable. The first failure was on the commit directly after that one.\nI'm not sure what instability Tom meant when he wrote \"-- results\nbelow depend on having quite accurate stats for atest12\".\n\nIt does seem plausible, given how slow prion is that autovacuum might\nbe trigger after the manual vacuum somehow and building stats with\njust 1k buckets instead of 10k. 0936d1b6 made some changes to disable\nautovacuum because it was sometimes coming in and messing with the\nstatistics, maybe we need to do the same here, or at least do\nsomething less temporary than changing default_statistics_target.\n\nselect attname,array_length(histogram_bounds,1) from pg_stats where\ntablename = 'atest12' order by attname;\n\nshould mention the array length is 10000 if it's working as intended.\nIs it worth sticking that query in there before and after the failures\nto ensure we're working with the stats we think we are?\n\nDavid\n\n\n",
"msg_date": "Tue, 9 Jun 2020 15:28:20 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent test plan change in \"privileges\" test on BF animal\n prion"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I see 0c882e52a did change the number of statistics targets on that\n> table. The first failure was on the commit directly after that one.\n> I'm not sure what instability Tom meant when he wrote \"-- results\n> below depend on having quite accurate stats for atest12\".\n\nSee [1], particularly the para about \"When I went to test 0002\".\nAt least one of those test cases fails if the planner estimates more\nthan one row being selected by the user-defined operator, and since the\ntable has 10K rows, that means we need 1/10000 selectivity precision.\n\n> It does seem plausible, given how slow prion is that autovacuum might\n> be trigger after the manual vacuum somehow and building stats with\n> just 1k buckets instead of 10k.\n\nHmm ... that's a plausible theory, perhaps. I forget: does autovac\nrecheck, after acquiring the requisite table lock, whether the table\nstill needs to be processed?\n\n> 0936d1b6 made some changes to disable\n> autovacuum because it was sometimes coming in and messing with the\n> statistics, maybe we need to do the same here, or at least do\n> something less temporary than changing default_statistics_target.\n\nYeah, setting that as a table parameter seems like a better idea than\nsetting default_statistics_target.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Jun 2020 23:41:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent test plan change in \"privileges\" test on BF animal\n prion"
},
{
"msg_contents": "On Tue, 9 Jun 2020 at 15:41, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > It does seem plausible, given how slow prion is that autovacuum might\n> > be trigger after the manual vacuum somehow and building stats with\n> > just 1k buckets instead of 10k.\n>\n> Hmm ... that's a plausible theory, perhaps. I forget: does autovac\n> recheck, after acquiring the requisite table lock, whether the table\n> still needs to be processed?\n\nIt does, but I wondered if there was a window after the manual vacuum\nresets n_ins_since_vacuum and between when autovacuum looks at it.\n\nDavid\n\n\n",
"msg_date": "Tue, 9 Jun 2020 15:48:23 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent test plan change in \"privileges\" test on BF animal\n prion"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Tue, 9 Jun 2020 at 15:41, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Hmm ... that's a plausible theory, perhaps. I forget: does autovac\n>> recheck, after acquiring the requisite table lock, whether the table\n>> still needs to be processed?\n\n> It does, but I wondered if there was a window after the manual vacuum\n> resets n_ins_since_vacuum and between when autovacuum looks at it.\n\nOh, there surely is, because of the lag in the stats collection mechanism.\nI'm trying to reproduce this now, but it's sounding pretty plausible.\n\nBTW, it looks like I managed to trim the reference off my prior message,\nbut I meant [1] to refer to\nhttps://www.postgresql.org/message-id/666679.1591138428%40sss.pgh.pa.us\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Jun 2020 23:55:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent test plan change in \"privileges\" test on BF animal\n prion"
},
{
"msg_contents": "I wrote:\n> I'm trying to reproduce this now, but it's sounding pretty plausible.\n\nYeah, that's definitely it. I was able to reproduce the failure semi\nreliably (every two or three tries) after adding -DRELCACHE_FORCE_RELEASE\n-DCATCACHE_FORCE_RELEASE and inserting a \"pg_sleep(1)\" just after the\nmanual vacuum in privileges.sql; and as you'd guessed, the stats arrays\nwere just normal size in the failing runs. After disabling autovac on\nthe table, the failure went away.\n\nThanks for the insight!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 09 Jun 2020 01:22:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent test plan change in \"privileges\" test on BF animal\n prion"
}
] |
[
{
"msg_contents": "Hi,\n\npg_itoa, pg_ltoa and pg_lltoa all have access to the length of the\nstring that is produced in the function by way of the \"len\" variable.\nThese functions don't have a great deal of use in core, but it seems\nthat most callers do require the len but end up getting it via\nstrlen(). It seems we could optimise this a little if we just had the\nfunctions return the length instead of making callers do the work\nthemselves.\n\nThis allows us to speed up a few cases. int2vectorout() should be\nfaster and int8out() becomes a bit faster if we get rid of the\nstrdup() call and replace it with a palloc()/memcpy() call.\n\nThe slight drawback that I can see from this is that on testing\nint4out() it gets slightly slower, which I assume is because I'm now\nreturning the length, but there's no use for it in that function.\n\ncreate table bi (a bigint);\ninsert into bi select generate_Series(1,10000000);\nvacuum freeze analyze bi;\n\nbench.sql = copy bi to '/dev/null';\n\nBIGINT test\n\ndrowley@amd3990x:~$ pgbench -n -f bench.sql -T 120 postgres\n\nMaster: latency average = 1791.597 ms\nPatched: latency average = 1705.322 ms (95.184%)\n\nINT test\n\ncreate table i (a int);\ninsert into i select generate_Series(1,10000000);\nvacuum freeze analyze i;\n\nbench.sql = copy i to '/dev/null';\n\ndrowley@amd3990x:~$ pgbench -n -f bench.sql -T 120 postgres\n\nMaster: latency average = 1631.956 ms\nPatched: latency average = 1678.626 ms (102.859%)\n\nAs you can see, this squeezes about 5% extra out of a copy of a 10\nmillion row bigint table but costs us almost 3% on an equivalent int\ntable. A likely workaround for that is moving the functions into the\nheader file and making them static inline. It would be nice not to\nhave to do that though.\n\nThese tests were done on modern AMD hardware. 
I've not tested yet on\nanything intel based.\n\nI've copied in Andrew as I know he only recently rewrote these\nfunctions and Andres since he did mention this in [1].\n\nI'm interested to know if that int4out regression exists on other hardware.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/20190920051857.2fhnvhvx4qdddviz@alap3.anarazel.de",
"msg_date": "Tue, 9 Jun 2020 18:53:06 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Speedup usages of pg_*toa() functions"
},
{
"msg_contents": ">>>>> \"David\" == David Rowley <dgrowleyml@gmail.com> writes:\n\n David> As you can see, this squeezes about 5% extra out of a copy of a\n David> 10 million row bigint table but costs us almost 3% on an\n David> equivalent int table.\n\nAnd once again I have to issue the reminder: you can have gains or\nlosses of several percent on microbenchmarks of this kind just by\ntouching unrelated pieces of code that are never used in the test. In\norder to demonstrate a consistent difference, you have to do each set of\ntests multiple times, with random amounts of padding added to some\nunrelated part of the code.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Tue, 09 Jun 2020 08:24:26 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Speedup usages of pg_*toa() functions"
},
{
"msg_contents": "On Tue, 9 Jun 2020 at 19:24, Andrew Gierth <andrew@tao11.riddles.org.uk> wrote:\n>\n> >>>>> \"David\" == David Rowley <dgrowleyml@gmail.com> writes:\n>\n> David> As you can see, this squeezes about 5% extra out of a copy of a\n> David> 10 million row bigint table but costs us almost 3% on an\n> David> equivalent int table.\n>\n> And once again I have to issue the reminder: you can have gains or\n> losses of several percent on microbenchmarks of this kind just by\n> touching unrelated pieces of code that are never used in the test. In\n> order to demonstrate a consistent difference, you have to do each set of\n> tests multiple times, with random amounts of padding added to some\n> unrelated part of the code.\n\nThanks for the reminder.\n\nInstead of that, I tried with clang 10.0.0. I was previously using gcc 9.3.\n\nBIGINT test\n\nMaster: latency average = 1842.182 ms\nPatched: latency average = 1715.418 ms\n\nINT test\n\nMaster: latency average = 1650.583 ms\nPatched: latency average = 1617.783 ms\n\nThere's nothing in the patch that makes the INT test faster, so I\nguess that's noise. The BIGINT test is about 7.3% faster in this\ncase.\n\nDavid\n\n\n",
"msg_date": "Tue, 9 Jun 2020 20:07:23 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Speedup usages of pg_*toa() functions"
},
{
"msg_contents": ">>>>> \"David\" == David Rowley <dgrowleyml@gmail.com> writes:\n\n David> This allows us to speed up a few cases. int2vectorout() should\n David> be faster and int8out() becomes a bit faster if we get rid of\n David> the strdup() call and replace it with a palloc()/memcpy() call.\n\nWhat about removing the memcpy entirely? I don't think we save anything\nmuch useful here by pallocing the exact length, rather than doing what\nint4out does and palloc a fixed size and convert the int directly into\nit.\n\ni.e.\n\nDatum\nint8out(PG_FUNCTION_ARGS)\n{\n int64 val = PG_GETARG_INT64(0);\n char *result = palloc(MAXINT8LEN + 1);\n\n pg_lltoa(val, result);\n PG_RETURN_CSTRING(result);\n}\n\nFor pg_ltoa, etc., I don't like adding the extra call to pg_ultoa_n - at\nleast on my clang, that results in two copies of pg_ultoa_n inlined.\nHow about doing it like,\n\nint\npg_lltoa(int64 value, char *a)\n{\n int len = 0;\n uint64 uvalue = value;\n\n if (value < 0)\n {\n uvalue = (uint64) 0 - uvalue;\n a[len++] = '-';\n }\n len += pg_ulltoa_n(uvalue, a + len);\n a[len] = '\\0';\n return len;\n}\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Tue, 09 Jun 2020 11:08:34 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Speedup usages of pg_*toa() functions"
},
{
"msg_contents": "On Tue, 9 Jun 2020 at 22:08, Andrew Gierth <andrew@tao11.riddles.org.uk> wrote:\n>\n> >>>>> \"David\" == David Rowley <dgrowleyml@gmail.com> writes:\n>\n> David> This allows us to speed up a few cases. int2vectorout() should\n> David> be faster and int8out() becomes a bit faster if we get rid of\n> David> the strdup() call and replace it with a palloc()/memcpy() call.\n>\n> What about removing the memcpy entirely? I don't think we save anything\n> much useful here by pallocing the exact length, rather than doing what\n> int4out does and palloc a fixed size and convert the int directly into\n> it.\n\nOn looking back through git blame, it seems int2out and int4out have\nbeen that way since at least 1996, before int8.c existed. int8out has\nbeen doing it since fa838876e9f -- Include 8-byte integer type. dated\n1998. Quite likely the larger than required palloc size back then was\nmore of a concern. So perhaps you're right about just doing it that\nway instead. With that and the ints I tested with, the int8\nperformance should be about aligned to int4 performance.\n\n> For pg_ltoa, etc., I don't like adding the extra call to pg_ultoa_n - at\n> least on my clang, that results in two copies of pg_ultoa_n inlined.\n> How about doing it like,\n>\n> int\n> pg_lltoa(int64 value, char *a)\n> {\n> int len = 0;\n> uint64 uvalue = value;\n>\n> if (value < 0)\n> {\n> uvalue = (uint64) 0 - uvalue;\n> a[len++] = '-';\n> }\n> len += pg_ulltoa_n(uvalue, a + len);\n> a[len] = '\\0';\n> return len;\n> }\n\nAgreed, that seems better.\n\nDavid\n\n\n",
"msg_date": "Tue, 9 Jun 2020 22:54:45 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Speedup usages of pg_*toa() functions"
},
{
"msg_contents": "Em ter., 9 de jun. de 2020 às 07:55, David Rowley <dgrowleyml@gmail.com>\nescreveu:\n\n> On Tue, 9 Jun 2020 at 22:08, Andrew Gierth <andrew@tao11.riddles.org.uk>\n> wrote:\n> >\n> > >>>>> \"David\" == David Rowley <dgrowleyml@gmail.com> writes:\n> >\n> > David> This allows us to speed up a few cases. int2vectorout() should\n> > David> be faster and int8out() becomes a bit faster if we get rid of\n> > David> the strdup() call and replace it with a palloc()/memcpy() call.\n> >\n> > What about removing the memcpy entirely? I don't think we save anything\n> > much useful here by pallocing the exact length, rather than doing what\n> > int4out does and palloc a fixed size and convert the int directly into\n> > it.\n>\n> On looking back through git blame, it seems int2out and int4out have\n> been that way since at least 1996, before int8.c existed. int8out has\n> been doing it since fa838876e9f -- Include 8-byte integer type. dated\n> 1998. Quite likely the larger than required palloc size back then was\n> more of a concern. So perhaps you're right about just doing it that\n> way instead. 
With that and the ints I tested with, the int8\n> performance should be about aligned to int4 performance.\n>\n> > For pg_ltoa, etc., I don't like adding the extra call to pg_ultoa_n - at\n> > least on my clang, that results in two copies of pg_ultoa_n inlined.\n> > How about doing it like,\n> >\n> > int\n> > pg_lltoa(int64 value, char *a)\n> > {\n> > int len = 0;\n> > uint64 uvalue = value;\n> >\n> > if (value < 0)\n> > {\n> > uvalue = (uint64) 0 - uvalue;\n> > a[len++] = '-';\n> > }\n> > len += pg_ulltoa_n(uvalue, a + len);\n> > a[len] = '\\0';\n> > return len;\n> > }\n>\n> Written like that, wouldn't it get better?\n\nint\npg_lltoa(int64 value, char *a)\n{\n if (value < 0)\n {\n int len = 0;\n uint64 uvalue = (uint64) 0 - uvalue;\n\n a[len++] = '-';\n len += pg_ulltoa_n(uvalue, a + len);\n a[len] = '\\0';\n return len;\n }\nelse\n return pg_ulltoa_n(value, a);\n}\n\nregards,\nRanier Vilela\n\nEm ter., 9 de jun. de 2020 às 07:55, David Rowley <dgrowleyml@gmail.com> escreveu:On Tue, 9 Jun 2020 at 22:08, Andrew Gierth <andrew@tao11.riddles.org.uk> wrote:\n>\n> >>>>> \"David\" == David Rowley <dgrowleyml@gmail.com> writes:\n>\n> David> This allows us to speed up a few cases. int2vectorout() should\n> David> be faster and int8out() becomes a bit faster if we get rid of\n> David> the strdup() call and replace it with a palloc()/memcpy() call.\n>\n> What about removing the memcpy entirely? I don't think we save anything\n> much useful here by pallocing the exact length, rather than doing what\n> int4out does and palloc a fixed size and convert the int directly into\n> it.\n\nOn looking back through git blame, it seems int2out and int4out have\nbeen that way since at least 1996, before int8.c existed. int8out has\nbeen doing it since fa838876e9f -- Include 8-byte integer type. dated\n1998. Quite likely the larger than required palloc size back then was\nmore of a concern. So perhaps you're right about just doing it that\nway instead. 
With that and the ints I tested with, the int8\nperformance should be about aligned to int4 performance.\n\n> For pg_ltoa, etc., I don't like adding the extra call to pg_ultoa_n - at\n> least on my clang, that results in two copies of pg_ultoa_n inlined.\n> How about doing it like,\n>\n> int\n> pg_lltoa(int64 value, char *a)\n> {\n> int len = 0;\n> uint64 uvalue = value;\n>\n> if (value < 0)\n> {\n> uvalue = (uint64) 0 - uvalue;\n> a[len++] = '-';\n> }\n> len += pg_ulltoa_n(uvalue, a + len);\n> a[len] = '\\0';\n> return len;\n> }\nWritten like that, wouldn't it get better?intpg_lltoa(int64 value, char *a){ if (value < 0) { int len = 0; uint64 uvalue = (uint64) 0 - uvalue; a[len++] = '-'; len += pg_ulltoa_n(uvalue, a + len); a[len] = '\\0'; return len; }\telse return pg_ulltoa_n(value, a);} regards,Ranier Vilela",
"msg_date": "Tue, 9 Jun 2020 11:31:35 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Speedup usages of pg_*toa() functions"
},
{
"msg_contents": ">>>>> \"Ranier\" == Ranier Vilela <ranier.vf@gmail.com> writes:\n\n Ranier> Written like that, wouldn't it get better?\n\n Ranier> int\n Ranier> pg_lltoa(int64 value, char *a)\n Ranier> {\n Ranier> if (value < 0)\n Ranier> {\n Ranier> int len = 0;\n Ranier> uint64 uvalue = (uint64) 0 - uvalue;\n Ranier> a[len++] = '-';\n Ranier> len += pg_ulltoa_n(uvalue, a + len);\n Ranier> a[len] = '\\0';\n Ranier> return len;\n Ranier> }\n Ranier> else\n Ranier> return pg_ulltoa_n(value, a);\n Ranier> }\n\nNo. While it doesn't matter so much for pg_lltoa since that's unlikely\nto inline multiple pg_ulltoa_n calls, if you do pg_ltoa like this it (a)\nends up with two copies of pg_ultoa_n inlined into it, and (b) you don't\nactually save any useful amount of time. Your version is also failing to\nadd the terminating '\\0' for the positive case and has other obvious\nbugs.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Tue, 09 Jun 2020 17:01:27 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Speedup usages of pg_*toa() functions"
},
{
"msg_contents": "Em ter., 9 de jun. de 2020 às 13:01, Andrew Gierth <\nandrew@tao11.riddles.org.uk> escreveu:\n\n> >>>>> \"Ranier\" == Ranier Vilela <ranier.vf@gmail.com> writes:\n>\n> Ranier> Written like that, wouldn't it get better?\n>\n> Ranier> int\n> Ranier> pg_lltoa(int64 value, char *a)\n> Ranier> {\n> Ranier> if (value < 0)\n> Ranier> {\n> Ranier> int len = 0;\n> Ranier> uint64 uvalue = (uint64) 0 - uvalue;\n> Ranier> a[len++] = '-';\n> Ranier> len += pg_ulltoa_n(uvalue, a + len);\n> Ranier> a[len] = '\\0';\n> Ranier> return len;\n> Ranier> }\n> Ranier> else\n> Ranier> return pg_ulltoa_n(value, a);\n> Ranier> }\n>\n> No. While it doesn't matter so much for pg_lltoa since that's unlikely\n> to inline multiple pg_ulltoa_n calls, if you do pg_ltoa like this it (a)\n> ends up with two copies of pg_ultoa_n inlined into it, and (b) you don't\n> actually save any useful amount of time. Your version is also failing to\n> add the terminating '\\0' for the positive case and has other obvious\n> bugs.\n>\n(a) Sorry, I'm not asm specialist.\n\n#include <stdio.h>\n\nint pg_utoa(unsigned int num, char * a) {\n int len;\n\n len = sprintf(a, \"%lu\", num);\n\n return len;\n}\n\nint pg_toa(int num, char * a)\n{\n if (num < 0) {\n int len;\n len = pg_utoa(num, a);\n a[len] = '\\0';\n return len;\n }\n else\n return pg_utoa(num, a);\n}\n\n\n.LC0:\n .string \"%lu\"\npg_utoa(unsigned int, char*):\n mov edx, edi\n xor eax, eax\n mov rdi, rsi\n mov esi, OFFSET FLAT:.LC0\n jmp sprintf\npg_toa(int, char*):\n push rbp\n test edi, edi\n mov rbp, rsi\n mov edx, edi\n mov esi, OFFSET FLAT:.LC0\n mov rdi, rbp\n mov eax, 0\n js .L7\n pop rbp\n jmp sprintf\n.L7:\n call sprintf\n movsx rdx, eax\n mov BYTE PTR [rbp+0+rdx], 0\n pop rbp\n ret\n\nWhere \" ends up with two copies of pg_ultoa_n inlined into it\", in this\nsimplified example?\n\n(b) I call this tail cut, I believe it saves time, for sure.\n\nRegarding bugs:\n\n(c) your version don't check size of a var, when pg_ulltoa_n\nwarning 
about \"least MAXINT8LEN bytes.\"\n\nSo in theory, I could blow it up, by calling pg_lltoa.\n\n(d) So I can't trust pg_ulltoa_n, when var a, is it big enough?\nIf not, there are more bugs.\n\nregards,\nRanier Vilela\n\nEm ter., 9 de jun. de 2020 às 13:01, Andrew Gierth <andrew@tao11.riddles.org.uk> escreveu:>>>>> \"Ranier\" == Ranier Vilela <ranier.vf@gmail.com> writes:\n\n Ranier> Written like that, wouldn't it get better?\n\n Ranier> int\n Ranier> pg_lltoa(int64 value, char *a)\n Ranier> {\n Ranier> if (value < 0)\n Ranier> {\n Ranier> int len = 0;\n Ranier> uint64 uvalue = (uint64) 0 - uvalue;\n Ranier> a[len++] = '-';\n Ranier> len += pg_ulltoa_n(uvalue, a + len);\n Ranier> a[len] = '\\0';\n Ranier> return len;\n Ranier> }\n Ranier> else\n Ranier> return pg_ulltoa_n(value, a);\n Ranier> }\n\nNo. While it doesn't matter so much for pg_lltoa since that's unlikely\nto inline multiple pg_ulltoa_n calls, if you do pg_ltoa like this it (a)\nends up with two copies of pg_ultoa_n inlined into it, and (b) you don't\nactually save any useful amount of time. 
Your version is also failing to\nadd the terminating '\\0' for the positive case and has other obvious\nbugs.(a) Sorry, I'm not asm specialist.\n#include <stdio.h>int pg_utoa(unsigned int num, char * a) { int len; len = sprintf(a, \"%lu\", num); return len;}int pg_toa(int num, char * a){ if (num < 0) { int len; len = pg_utoa(num, a); a[len] = '\\0'; return len; } else return pg_utoa(num, a);}\n \n.LC0: .string \"%lu\"pg_utoa(unsigned int, char*): mov edx, edi xor eax, eax mov rdi, rsi mov esi, OFFSET FLAT:.LC0 jmp sprintfpg_toa(int, char*): push rbp test edi, edi mov rbp, rsi mov edx, edi mov esi, OFFSET FLAT:.LC0 mov rdi, rbp mov eax, 0 js .L7 pop rbp jmp sprintf.L7: call sprintf movsx rdx, eax mov BYTE PTR [rbp+0+rdx], 0 pop rbp ret\nWhere \"\nends up with two copies of pg_ultoa_n inlined into it\", in this simplified example?(b) I call this tail cut, I believe it saves time, for sure.Regarding bugs:(c) your version don't check size of a var, when pg_ulltoa_nwarning about \"least MAXINT8LEN bytes.\"So in theory, I could blow it up, by calling pg_lltoa.(d) So I can't trust pg_ulltoa_n, when var a, is it big enough?If not, there are more bugs.regards,Ranier Vilela",
"msg_date": "Tue, 9 Jun 2020 15:37:57 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Speedup usages of pg_*toa() functions"
},
{
"msg_contents": ">>>>> \"Ranier\" == Ranier Vilela <ranier.vf@gmail.com> writes:\n\n Ranier> Where \" ends up with two copies of pg_ultoa_n inlined into it\",\n Ranier> in this simplified example?\n\nThe two references to sprintf are both inlined copies of your pg_utoa.\n\n Ranier> (b) I call this tail cut, I believe it saves time, for sure.\n\nYou seem to have missed the point that the pg_ultoa_n / pg_ulltoa_n\nfunctions DO NOT ADD A TRAILING NUL. Which means that pg_ltoa / pg_lltoa\ncan't just tail call them, since they must add the NUL after.\n\n Ranier> Regarding bugs:\n\n Ranier> (c) your version don't check size of a var, when pg_ulltoa_n\n Ranier> warning about \"least MAXINT8LEN bytes.\"\n\n Ranier> So in theory, I could blow it up, by calling pg_lltoa.\n\nNo. Callers of pg_lltoa are required to provide a buffer of at least\nMAXINT8LEN+1 bytes.\n\n-- \nAndrew.\n\n\n",
"msg_date": "Tue, 09 Jun 2020 19:52:59 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Speedup usages of pg_*toa() functions"
},
{
"msg_contents": "Em ter., 9 de jun. de 2020 às 15:53, Andrew Gierth <\nandrew@tao11.riddles.org.uk> escreveu:\n\n> >>>>> \"Ranier\" == Ranier Vilela <ranier.vf@gmail.com> writes:\n>\n> Ranier> Where \" ends up with two copies of pg_ultoa_n inlined into it\",\n> Ranier> in this simplified example?\n>\n> The two references to sprintf are both inlined copies of your pg_utoa.\n>\n> Ranier> (b) I call this tail cut, I believe it saves time, for sure.\n>\n> You seem to have missed the point that the pg_ultoa_n / pg_ulltoa_n\n> functions DO NOT ADD A TRAILING NUL. Which means that pg_ltoa / pg_lltoa\n> can't just tail call them, since they must add the NUL after.\n>\n> Ranier> Regarding bugs:\n>\n> Ranier> (c) your version don't check size of a var, when pg_ulltoa_n\n> Ranier> warning about \"least MAXINT8LEN bytes.\"\n>\n> Ranier> So in theory, I could blow it up, by calling pg_lltoa.\n>\n> No. Callers of pg_lltoa are required to provide a buffer of at least\n> MAXINT8LEN+1 bytes.\n>\nThanks for explanations.\n\nSo I would change, just the initialization (var uvalue), even though it is\nirrelevant.\n\nint\npg_lltoa(int64 value, char *a)\n{\nint len = 0;\nuint64 uvalue;\n\nif (value < 0)\n{\nuvalue = (uint64) 0 - uvalue;\n a[len++] = '-';\n}\nelse\nuvalue = value;\n\nlen += pg_ulltoa_n(uvalue, a + len);\na[len] = '\\0';\n\nreturn len;\n}\n\nregards,\nRanier Vilela\n\nEm ter., 9 de jun. de 2020 às 15:53, Andrew Gierth <andrew@tao11.riddles.org.uk> escreveu:>>>>> \"Ranier\" == Ranier Vilela <ranier.vf@gmail.com> writes:\n\n Ranier> Where \" ends up with two copies of pg_ultoa_n inlined into it\",\n Ranier> in this simplified example?\n\nThe two references to sprintf are both inlined copies of your pg_utoa.\n\n Ranier> (b) I call this tail cut, I believe it saves time, for sure.\n\nYou seem to have missed the point that the pg_ultoa_n / pg_ulltoa_n\nfunctions DO NOT ADD A TRAILING NUL. 
Which means that pg_ltoa / pg_lltoa\ncan't just tail call them, since they must add the NUL after.\n\n Ranier> Regarding bugs:\n\n Ranier> (c) your version don't check size of a var, when pg_ulltoa_n\n Ranier> warning about \"least MAXINT8LEN bytes.\"\n\n Ranier> So in theory, I could blow it up, by calling pg_lltoa.\n\nNo. Callers of pg_lltoa are required to provide a buffer of at least\nMAXINT8LEN+1 bytes.Thanks for explanations.So I would change, just the initialization (var uvalue), even though it is irrelevant.intpg_lltoa(int64 value, char *a){\tint\t\t\tlen = 0;\tuint64\t\tuvalue;\tif (value < 0)\t{\t\tuvalue = (uint64) 0 - uvalue; a[len++] = '-';\t\t\t}\telse\t\tuvalue = value;\tlen += pg_ulltoa_n(uvalue, a + len);\ta[len] = '\\0'; \treturn len;} regards,Ranier Vilela",
"msg_date": "Tue, 9 Jun 2020 16:51:58 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Speedup usages of pg_*toa() functions"
},
{
"msg_contents": ">>>>> \"Ranier\" == Ranier Vilela <ranier.vf@gmail.com> writes:\n\n Ranier> So I would change, just the initialization (var uvalue), even though it is\n Ranier> irrelevant.\n\n Ranier> int\n Ranier> pg_lltoa(int64 value, char *a)\n Ranier> {\n Ranier> int len = 0;\n Ranier> uint64 uvalue;\n\n Ranier> if (value < 0)\n Ranier> {\n Ranier> uvalue = (uint64) 0 - uvalue;\n\nUse of uninitialized variable.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Tue, 09 Jun 2020 21:42:45 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Speedup usages of pg_*toa() functions"
},
{
"msg_contents": "Em ter., 9 de jun. de 2020 às 17:42, Andrew Gierth <\nandrew@tao11.riddles.org.uk> escreveu:\n\n> >>>>> \"Ranier\" == Ranier Vilela <ranier.vf@gmail.com> writes:\n>\n> Ranier> So I would change, just the initialization (var uvalue), even\n> though it is\n> Ranier> irrelevant.\n>\n> Ranier> int\n> Ranier> pg_lltoa(int64 value, char *a)\n> Ranier> {\n> Ranier> int len = 0;\n> Ranier> uint64 uvalue;\n>\n> Ranier> if (value < 0)\n> Ranier> {\n> Ranier> uvalue = (uint64) 0 - uvalue;\n>\n> Use of uninitialized variable.\n>\nSorry, my mistake.\n\nuvalue = (uint64) 0 - value;\n\nregards,\nRanier Vilela\n\n\n>\n> --\n> Andrew (irc:RhodiumToad)\n>\n\nEm ter., 9 de jun. de 2020 às 17:42, Andrew Gierth <andrew@tao11.riddles.org.uk> escreveu:>>>>> \"Ranier\" == Ranier Vilela <ranier.vf@gmail.com> writes:\n\n Ranier> So I would change, just the initialization (var uvalue), even though it is\n Ranier> irrelevant.\n\n Ranier> int\n Ranier> pg_lltoa(int64 value, char *a)\n Ranier> {\n Ranier> int len = 0;\n Ranier> uint64 uvalue;\n\n Ranier> if (value < 0)\n Ranier> {\n Ranier> uvalue = (uint64) 0 - uvalue;\n\nUse of uninitialized variable.Sorry, my mistake.uvalue = (uint64) 0 - value;regards,Ranier Vilela \n\n-- \nAndrew (irc:RhodiumToad)",
"msg_date": "Tue, 9 Jun 2020 18:50:07 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Speedup usages of pg_*toa() functions"
},
{
"msg_contents": "On Tue, 9 Jun 2020 at 22:08, Andrew Gierth <andrew@tao11.riddles.org.uk> wrote:\n>\n> >>>>> \"David\" == David Rowley <dgrowleyml@gmail.com> writes:\n>\n> David> This allows us to speed up a few cases. int2vectorout() should\n> David> be faster and int8out() becomes a bit faster if we get rid of\n> David> the strdup() call and replace it with a palloc()/memcpy() call.\n>\n> What about removing the memcpy entirely? I don't think we save anything\n> much useful here by pallocing the exact length, rather than doing what\n> int4out does and palloc a fixed size and convert the int directly into\n> it.\n\nThe attached 0001 patch does this.\n\ncreate table bi (a bigint);\ninsert into bi select generate_Series(1,10000000);\nvacuum freeze analyze bi;\n\nquery = copy bi to '/dev/null';\n120 second pgbench run.\n\nThe results are:\n\nGCC master: latency average = 1757.556 ms\nGCC master+0001: latency average = 1588.793 ms (90.4%)\n\nclang master: latency average = 1818.952 ms\nclang master+0001: latency average = 1649.100 ms (90.6%)\n\n\n> For pg_ltoa, etc., I don't like adding the extra call to pg_ultoa_n - at\n> least on my clang, that results in two copies of pg_ultoa_n inlined.\n> How about doing it like,\n>\n> int\n> pg_lltoa(int64 value, char *a)\n> {\n> int len = 0;\n> uint64 uvalue = value;\n>\n> if (value < 0)\n> {\n> uvalue = (uint64) 0 - uvalue;\n> a[len++] = '-';\n> }\n> len += pg_ulltoa_n(uvalue, a + len);\n> a[len] = '\\0';\n> return len;\n> }\n\nThe 0002 patch does it this way.\n\nDavid",
"msg_date": "Wed, 10 Jun 2020 11:57:27 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Speedup usages of pg_*toa() functions"
},
{
"msg_contents": ">>>>> \"Ranier\" == Ranier Vilela <ranier.vf@gmail.com> writes:\n\n Ranier> Sorry, my mistake.\n\n Ranier> uvalue = (uint64) 0 - value;\n\nThis doesn't gain anything over the original, and it has the downside of\nhiding an int64 to uint64 conversion that is actually quite sensitive.\nFor example, it might tempt someone to rewrite it as\n\n uvalue = -value;\n\nwhich is actually incorrect (though our -fwrapv will hide the error).\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Wed, 10 Jun 2020 08:25:03 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Speedup usages of pg_*toa() functions"
},
{
"msg_contents": "On Wed, 10 Jun 2020 at 11:57, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Tue, 9 Jun 2020 at 22:08, Andrew Gierth <andrew@tao11.riddles.org.uk> wrote:\n> >\n> > >>>>> \"David\" == David Rowley <dgrowleyml@gmail.com> writes:\n> >\n> > David> This allows us to speed up a few cases. int2vectorout() should\n> > David> be faster and int8out() becomes a bit faster if we get rid of\n> > David> the strdup() call and replace it with a palloc()/memcpy() call.\n> >\n> > What about removing the memcpy entirely? I don't think we save anything\n> > much useful here by pallocing the exact length, rather than doing what\n> > int4out does and palloc a fixed size and convert the int directly into\n> > it.\n>\n> The attached 0001 patch does this.\n\nPending any objections, I'd like to push both of these patches in the\nnext few days to master.\n\nAnyone object to changing the signature of these functions in 0002, or\nhave concerns about allocating the maximum memory that we might\nrequire in int8out()?\n\nDavid\n\n\n",
"msg_date": "Thu, 11 Jun 2020 15:36:48 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Speedup usages of pg_*toa() functions"
},
{
"msg_contents": ">>>>> \"David\" == David Rowley <dgrowleyml@gmail.com> writes:\n\n David> Pending any objections, I'd like to push both of these patches\n David> in the next few days to master.\n\nFor the second patch, can we take the opportunity to remove the\nextraneous blank line at the top of pg_ltoa, and add the two missing\n\"extern\"s in builtins.h for pg_ultoa_n and pg_ulltoa_n ?\n\n David> Anyone object to changing the signature of these functions in\n David> 0002, or have concerns about allocating the maximum memory that\n David> we might require in int8out()?\n\nChanging the function signatures seems safe enough. The memory thing\nonly seems likely to be an issue if you allocate a lot of text strings\nfor bigint values without a context reset, and I'm not sure where that\nwould happen (maybe passing large bigint arrays to pl/perl or pl/python\nwould do it?)\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Thu, 11 Jun 2020 07:52:51 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Speedup usages of pg_*toa() functions"
},
{
"msg_contents": "On Thu, 11 Jun 2020 at 18:52, Andrew Gierth <andrew@tao11.riddles.org.uk> wrote:\n> For the second patch, can we take the opportunity to remove the\n> extraneous blank line at the top of pg_ltoa, and add the two missing\n> \"extern\"s in builtins.h for pg_ultoa_n and pg_ulltoa_n ?\n\nI think since we've branched for PG14 now that fixing those should be\nbackpatched to PG13.\n\nDavid\n\n\n",
"msg_date": "Sat, 13 Jun 2020 10:58:34 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Speedup usages of pg_*toa() functions"
},
{
"msg_contents": "On Thu, 11 Jun 2020 at 18:52, Andrew Gierth <andrew@tao11.riddles.org.uk> wrote:\n>\n> >>>>> \"David\" == David Rowley <dgrowleyml@gmail.com> writes:\n>\n> David> Pending any objections, I'd like to push both of these patches\n> David> in the next few days to master.\n>\n> For the second patch, can we take the opportunity to remove the\n> extraneous blank line at the top of pg_ltoa, and add the two missing\n> \"extern\"s in builtins.h for pg_ultoa_n and pg_ulltoa_n ?\n>\n> David> Anyone object to changing the signature of these functions in\n> David> 0002, or have concerns about allocating the maximum memory that\n> David> we might require in int8out()?\n>\n> Changing the function signatures seems safe enough. The memory thing\n> only seems likely to be an issue if you allocate a lot of text strings\n> for bigint values without a context reset, and I'm not sure where that\n> would happen (maybe passing large bigint arrays to pl/perl or pl/python\n> would do it?)\n\nI ended up chickening out of doing the larger allocation\nunconditionally. Instead, I pushed the original idea of doing the\npalloc/memcpy of the length returned by pg_lltoa. That gets us most\nof the gains without the change in memory usage behaviour.\n\nThanks for your reviews on this.\n\nDavid\n\n\n",
"msg_date": "Sat, 13 Jun 2020 12:36:36 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Speedup usages of pg_*toa() functions"
},
{
"msg_contents": "On Sat, Jun 13, 2020 at 8:36 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> I ended up chickening out of doing the larger allocation\n> unconditionally. Instead, I pushed the original idea of doing the\n> palloc/memcpy of the length returned by pg_lltoa. That gets us most\n> of the gains without the change in memory usage behaviour.\n\nThis was still marked as needing review in commitfest, so I marked it\nas committed.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 2 Jul 2020 11:57:47 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speedup usages of pg_*toa() functions"
}
] |
[
{
"msg_contents": "Hi,\n\nI encountered the following assertion failure when I changed\nlogical_decoding_work_mem to lower value while logical replication\nis running. This happend in the master branch.\n\nTRAP: FailedAssertion(\"rb->size < logical_decoding_work_mem * 1024L\", File: \"reorderbuffer.c\", Line: 2403)\n0 postgres 0x000000010755bf80 ExceptionalCondition + 160\n1 postgres 0x00000001072d9f81 ReorderBufferCheckMemoryLimit + 257\n2 postgres 0x00000001072d9b74 ReorderBufferQueueChange + 228\n3 postgres 0x00000001072cd107 DecodeInsert + 391\n4 postgres 0x00000001072cc4ef DecodeHeapOp + 335\n5 postgres 0x00000001072cb9e4 LogicalDecodingProcessRecord + 196\n6 postgres 0x000000010730bf06 XLogSendLogical + 166\n7 postgres 0x000000010730b409 WalSndLoop + 217\n8 postgres 0x00000001073075fc StartLogicalReplication + 716\n9 postgres 0x0000000107305d38 exec_replication_command + 1192\n10 postgres 0x000000010737ea7f PostgresMain + 2463\n11 postgres 0x00000001072a9c4a BackendRun + 570\n12 postgres 0x00000001072a907b BackendStartup + 475\n13 postgres 0x00000001072a7fe1 ServerLoop + 593\n14 postgres 0x00000001072a5a5a PostmasterMain + 5898\n15 postgres 0x0000000107187b59 main + 761\n16 libdyld.dylib 0x00007fff6c00c3d5 start + 1\n\n\nReorderBufferCheckMemoryLimit() explains that it relies on\nthe following (commented) assumption. But this seems incorrect\nwhen logical_decoding_work_mem is decreased. I wonder if we may\nneed to keep evicting the transactions until we don't exceed\nmemory limit.\n\n\t/*\n\t * And furthermore, evicting the transaction should get us below the\n\t * memory limit again - it is not possible that we're still exceeding the\n\t * memory limit after evicting the transaction.\n\t *\n\t * This follows from the simple fact that the selected transaction is at\n\t * least as large as the most recent change (which caused us to go over\n\t * the memory limit). 
So by evicting it we're definitely back below the\n\t * memory limit.\n\t */\n\tAssert(rb->size < logical_decoding_work_mem * 1024L);\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 9 Jun 2020 17:26:24 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "FailedAssertion at ReorderBufferCheckMemoryLimit()"
},
{
"msg_contents": "On Tue, Jun 9, 2020 at 1:56 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> Hi,\n>\n> I encountered the following assertion failure when I changed\n> logical_decoding_work_mem to lower value while logical replication\n> is running. This happend in the master branch.\n>\n> TRAP: FailedAssertion(\"rb->size < logical_decoding_work_mem * 1024L\", File: \"reorderbuffer.c\", Line: 2403)\n..\n>\n>\n> ReorderBufferCheckMemoryLimit() explains that it relies on\n> the following (commented) assumption. But this seems incorrect\n> when logical_decoding_work_mem is decreased.\n>\n\nYeah, that could be a problem.\n\n> I wonder if we may\n> need to keep evicting the transactions until we don't exceed\n> memory limit.\n>\n\nYes, that should be the right fix here.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 9 Jun 2020 14:28:19 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: FailedAssertion at ReorderBufferCheckMemoryLimit()"
},
{
"msg_contents": "On Tue, Jun 9, 2020 at 2:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jun 9, 2020 at 1:56 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >\n>\n> > I wonder if we may\n> > need to keep evicting the transactions until we don't exceed\n> > memory limit.\n> >\n>\n> Yes, that should be the right fix here.\n>\n\nCan you please check whether the attached patch fixes the problem for you?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 10 Jun 2020 08:30:15 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: FailedAssertion at ReorderBufferCheckMemoryLimit()"
},
{
"msg_contents": "\n\nOn 2020/06/10 12:00, Amit Kapila wrote:\n> On Tue, Jun 9, 2020 at 2:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Tue, Jun 9, 2020 at 1:56 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>\n>>\n>>> I wonder if we may\n>>> need to keep evicting the transactions until we don't exceed\n>>> memory limit.\n>>>\n>>\n>> Yes, that should be the right fix here.\n>>\n> \n> Can you please check whether the attached patch fixes the problem for you?\n\nThanks for the patch! The patch looks good to me.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 10 Jun 2020 12:45:40 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: FailedAssertion at ReorderBufferCheckMemoryLimit()"
},
{
"msg_contents": "On Wed, Jun 10, 2020 at 9:15 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2020/06/10 12:00, Amit Kapila wrote:\n> > On Tue, Jun 9, 2020 at 2:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>\n> >> On Tue, Jun 9, 2020 at 1:56 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>\n> >>\n> >>> I wonder if we may\n> >>> need to keep evicting the transactions until we don't exceed\n> >>> memory limit.\n> >>>\n> >>\n> >> Yes, that should be the right fix here.\n> >>\n> >\n> > Can you please check whether the attached patch fixes the problem for you?\n>\n> Thanks for the patch! The patch looks good to me.\n>\n\nThanks, pushed!\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 10 Jun 2020 16:48:45 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: FailedAssertion at ReorderBufferCheckMemoryLimit()"
}
] |
[
{
"msg_contents": "Hi, hackers\n\nWhen some clients connect to database in idle state, postgres do not close the idle sessions,\nhere i add a new GUC idle_session_timeout to let postgres close the idle sessions, it samilar\nto idle_in_transaction_session_timeout.\n\nBest, regards.\n\nJapin Li",
"msg_date": "Tue, 9 Jun 2020 09:02:42 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Terminate the idle sessions"
},
{
"msg_contents": "On Tuesday, June 9, 2020, Li Japin <japinli@hotmail.com> wrote:\n\n> Hi, hackers\n>\n> When some clients connect to database in idle state, postgres do not close\n> the idle sessions,\n> here i add a new GUC idle_session_timeout to let postgres close the idle\n> sessions, it samilar\n> to idle_in_transaction_session_timeout\n>\n\nI’m curious as to the use case because I cannot imagine using this. Idle\nconnections are normal. Seems better to monitor them and conditionally\nexecute the disconnect backend function from the monitoring layer than\nindiscriminately disconnect based upon time. Though i do see an\ninteresting case for attaching to specific login user accounts that only\nmanually login and want the equivalent of a timed screen lock.\n\nDavid J.\n\nOn Tuesday, June 9, 2020, Li Japin <japinli@hotmail.com> wrote:Hi, hackers\n\nWhen some clients connect to database in idle state, postgres do not close the idle sessions,\nhere i add a new GUC idle_session_timeout to let postgres close the idle sessions, it samilar\nto idle_in_transaction_session_timeout\nI’m curious as to the use case because I cannot imagine using this. Idle connections are normal. Seems better to monitor them and conditionally execute the disconnect backend function from the monitoring layer than indiscriminately disconnect based upon time. Though i do see an interesting case for attaching to specific login user accounts that only manually login and want the equivalent of a timed screen lock.David J.",
"msg_date": "Tue, 9 Jun 2020 07:35:08 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "On Jun 9, 2020, at 10:35 PM, David G. Johnston <david.g.johnston@gmail.com<mailto:david.g.johnston@gmail.com>> wrote:\r\n\r\nI’m curious as to the use case because I cannot imagine using this. Idle connections are normal. Seems better to monitor them and conditionally execute the disconnect backend function from the monitoring layer than indiscriminately disconnect based upon time.\r\n\r\nI agree with you. But we can also give the user to control the idle sessions lifetime.\r\n\n\n\n\n\n\n\n\n\nOn Jun 9, 2020, at 10:35 PM, David G. Johnston <david.g.johnston@gmail.com> wrote:\n\nI’m\r\n curious as to the use case because I cannot imagine using this. Idle connections are normal. Seems better to monitor them and conditionally execute the disconnect backend function from the monitoring layer than indiscriminately disconnect based upon time. \n\n\n\nI agree with you. But we can also give the user to control the idle sessions lifetime.",
"msg_date": "Wed, 10 Jun 2020 05:20:36 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "On Wed, Jun 10, 2020 at 05:20:36AM +0000, Li Japin wrote:\n> I agree with you. But we can also give the user to control the idle\n> sessions lifetime.\n\nIdle sessions staying around can be a problem in the long run as they\nimpact snapshot building. You could for example use a background\nworker to do this work, like that:\nhttps://github.com/michaelpq/pg_plugins/tree/master/kill_idle\n--\nMichael",
"msg_date": "Wed, 10 Jun 2020 17:25:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "On Jun 10, 2020, at 4:25 PM, Michael Paquier <michael@paquier.xyz<mailto:michael@paquier.xyz>> wrote:\n\nIdle sessions staying around can be a problem in the long run as they\nimpact snapshot building. You could for example use a background\nworker to do this work, like that:\nhttps://github.com/michaelpq/pg_plugins/tree/master/kill_idle\n\nWhy not implement it in the core of Postgres? Are there any disadvantages of\nimplementing it in the core of Postgres?\n\nJapin Li\n\n\n\n\n\n\n\n\n\nOn Jun 10, 2020, at 4:25 PM, Michael Paquier <michael@paquier.xyz> wrote:\n\nIdle\n sessions staying around can be a problem in the long run as they\nimpact\n snapshot building. You could for example use a background\nworker\n to do this work, like that:\nhttps://github.com/michaelpq/pg_plugins/tree/master/kill_idle\n\n\n\nWhy not implement it in the core of Postgres? Are there any disadvantages of\nimplementing it in the core of Postgres?\n\n\nJapin Li",
"msg_date": "Wed, 10 Jun 2020 13:53:12 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": ">\n> > Why not implement it in the core of Postgres? Are there any\ndisadvantages of\n> implementing it in the core of Postgres?\nI was surprised this wasn't a feature when I looked into it a couple years\nago. I'd use it if it were built in, but I am not installing something\nextra just for this.\n\n> I’m curious as to the use case because I cannot imagine using this.\n\nMy use case is, I have a primary application that connects to the DB, most\nusers work through that (setting is useless for this scenario, app manages\nit's connections well enough). I also have a number of internal users who\ndeal with data ingestion and connect to the DB directly to work, and those\nusers sometimes leave query windows open for days accidentally. Generally\nnot an issue, but would be nice to be able to time those connections out.\n\nJust my $0.02, but I am +1.\n-Adam\n\n> Why not implement it in the core of Postgres? Are there any disadvantages of> implementing it in the core of Postgres? I was surprised this wasn't a feature when I looked into it a couple years ago. I'd use it if it were built in, but I am not installing something extra just for this.> I’m curious as to the use case because I cannot imagine using this.My use case is, I have a primary application that connects to the DB, most users work through that (setting is useless for this scenario, app manages it's connections well enough). I also have a number of internal users who deal with data ingestion and connect to the DB directly to work, and those users sometimes leave query windows open for days accidentally. Generally not an issue, but would be nice to be able to time those connections out. Just my $0.02, but I am +1.-Adam",
"msg_date": "Wed, 10 Jun 2020 10:27:11 -0400",
"msg_from": "Adam Brusselback <adambrusselback@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "On Jun 10, 2020, at 10:27 PM, Adam Brusselback <adambrusselback@gmail.com<mailto:adambrusselback@gmail.com>> wrote:\n\nMy use case is, I have a primary application that connects to the DB, most users work through that (setting is useless for this scenario, app manages it's connections well enough). I also have a number of internal users who deal with data ingestion and connect to the DB directly to work, and those users sometimes leave query windows open for days accidentally. Generally not an issue, but would be nice to be able to time those connections out.\n\nIf there is no big impact, I think we might add it builtin.\n\nJapin Li\n\n\n\n\n\n\n\n\n\nOn Jun 10, 2020, at 10:27 PM, Adam Brusselback <adambrusselback@gmail.com> wrote:\n\nMy\n use case is, I have a primary application that connects to the DB, most users work through that (setting is useless for this scenario, app manages it's connections well enough). I also have a number of internal users who deal with data ingestion and connect\n to the DB directly to work, and those users sometimes leave query windows open for days accidentally. Generally not an issue, but would be nice to be able to time those connections out. \n\n\n\nIf there is no big impact, I think we might add it builtin.\n\n\nJapin Li",
"msg_date": "Thu, 11 Jun 2020 13:57:24 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nI applied this patch to the PG13 branch and generally this feature works as described. The new \"idle_session_timeout\" that controls the idle session disconnection is not in the default postgresql.conf and I think it should be included there with default value 0, which means disabled. \r\nThere is currently no enforced minimum value for \"idle_session_timeout\" (except for value 0 for disabling the feature), so user can put any value larger than 0 and it could be very small like 500 or even 50 millisecond, this would make any psql connection to disconnect shortly after it has connected, which may not be ideal. Many systems I have worked with have 30 minutes inactivity timeout by default, and I think it would be better and safer to enforce a reasonable minimum timeout value",
"msg_date": "Mon, 10 Aug 2020 21:42:42 +0000",
"msg_from": "Cary Huang <cary.huang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "On Mon, Aug 10, 2020 at 2:43 PM Cary Huang <cary.huang@highgo.ca> wrote:\n\n> There is currently no enforced minimum value for \"idle_session_timeout\"\n> (except for value 0 for disabling the feature), so user can put any value\n> larger than 0 and it could be very small like 500 or even 50 millisecond,\n> this would make any psql connection to disconnect shortly after it has\n> connected, which may not be ideal. Many systems I have worked with have 30\n> minutes inactivity timeout by default, and I think it would be better and\n> safer to enforce a reasonable minimum timeout value\n\n\nI'd accept a value of say 1,000 being minimum in order to reinforce the\nfact that a unit-less input, while possible, is taken to be milliseconds\nand such small values most likely mean the user has made a mistake. I\nwould not choose a minimum allowed value solely based on our concept of\n\"reasonable\". I don't imagine a value of say 10 seconds, while seemingly\nunreasonable, is going to be unsafe.\n\nDavid J.\n\nOn Mon, Aug 10, 2020 at 2:43 PM Cary Huang <cary.huang@highgo.ca> wrote:There is currently no enforced minimum value for \"idle_session_timeout\" (except for value 0 for disabling the feature), so user can put any value larger than 0 and it could be very small like 500 or even 50 millisecond, this would make any psql connection to disconnect shortly after it has connected, which may not be ideal. Many systems I have worked with have 30 minutes inactivity timeout by default, and I think it would be better and safer to enforce a reasonable minimum timeout valueI'd accept a value of say 1,000 being minimum in order to reinforce the fact that a unit-less input, while possible, is taken to be milliseconds and such small values most likely mean the user has made a mistake. I would not choose a minimum allowed value solely based on our concept of \"reasonable\". 
I don't imagine a value of say 10 seconds, while seemingly unreasonable, is going to be unsafe.David J.",
"msg_date": "Mon, 10 Aug 2020 14:49:36 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "Hi,\r\n\r\nOn Aug 11, 2020, at 5:42 AM, Cary Huang <cary.huang@highgo.ca<mailto:cary.huang@highgo.ca>> wrote:\r\n\r\nI applied this patch to the PG13 branch and generally this feature works as described. The new \"idle_session_timeout\" that controls the idle session disconnection is not in the default postgresql.conf and I think it should be included there with default value 0, which means disabled.\r\n\r\nThanks for looking at it!\r\n\r\nI’ve attached a new version that add “idle_session_timeout” in the default postgresql.conf.\r\n\r\nOn Mon, Aug 10, 2020 at 2:43 PM Cary Huang <cary.huang@highgo.ca<mailto:cary.huang@highgo.ca>> wrote:\r\nThere is currently no enforced minimum value for \"idle_session_timeout\" (except for value 0 for disabling the feature), so user can put any value larger than 0 and it could be very small like 500 or even 50 millisecond, this would make any psql connection to disconnect shortly after it has connected, which may not be ideal. Many systems I have worked with have 30 minutes inactivity timeout by default, and I think it would be better and safer to enforce a reasonable minimum timeout value\r\n\r\nI'd accept a value of say 1,000 being minimum in order to reinforce the fact that a unit-less input, while possible, is taken to be milliseconds and such small values most likely mean the user has made a mistake. I would not choose a minimum allowed value solely based on our concept of \"reasonable\". I don't imagine a value of say 10 seconds, while seemingly unreasonable, is going to be unsafe.\r\n\r\nI think David is right, see “idle_in_transaction_session_timeout”, it also doesn’t have a “reasonable” minimum value.",
"msg_date": "Tue, 11 Aug 2020 03:14:58 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "On Tue, Aug 11, 2020 at 8:45 AM Li Japin <japinli@hotmail.com> wrote:\n>\n> I’ve attached a new version that add “idle_session_timeout” in the default postgresql.conf.\n>\n\nHi, I would like to just mention a use case I thought of while discussing [1]:\n\nIn postgres_fdw: assuming we use idle_in_session_timeout on remote\nbackends, the remote sessions will be closed after timeout, but the\nlocally cached connection cache entries still exist and become stale.\nThe subsequent queries that may use the cached connections will fail,\nof course these subsequent queries can retry the connections only at\nthe beginning of a remote txn but not in the middle of a remote txn,\nas being discussed in [2]. For instance, in a long running local txn,\nlet say we used a remote connection at the beginning of the local\ntxn(note that it will open a remote session and it's entry is cached\nin local connection cache), only we use the cached connection later at\nsome point in the local txn, by then let say the\nidle_in_session_timeout has happened on the remote backend and the\nremote session would have been closed, the long running local txn will\nfail instead of succeeding.\n\nI think, since the idle_session_timeout is by default disabled, we\nhave no problem. My thought is what if a user enables the\nfeature(knowingly or unknowingly) on the remote backend? If the user\nknows about the above scenario, that may be fine. 
On the other hand,\neither we can always the feature on the remote backend(at the\nbeginning of the remote txn, like we set for some other configuration\nsettings see - configure_remote_session() in connection.c) or how\nabout mentioning the above scenario in this feature documentation?\n\n[1] - https://www.postgresql.org/message-id/CALj2ACU1NBQo9mihA15dFf6udkOi7m0u2_s5QJ6dzk%3DZQyVbwQ%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/flat/CALj2ACUAi23vf1WiHNar_LksM9EDOWXcbHCo-fD4Mbr1d%3D78YQ%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 14 Aug 2020 11:45:28 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "On Aug 14, 2020, at 2:15 PM, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com<mailto:bharath.rupireddyforpostgres@gmail.com>> wrote:\n\nI think, since the idle_session_timeout is by default disabled, we\nhave no problem. My thought is what if a user enables the\nfeature(knowingly or unknowingly) on the remote backend? If the user\nknows about the above scenario, that may be fine. On the other hand,\neither we can always the feature on the remote backend(at the\nbeginning of the remote txn, like we set for some other configuration\nsettings see - configure_remote_session() in connection.c) or how\nabout mentioning the above scenario in this feature documentation?\n\nThough we can disable the idle_session_timeout when using postgres_fdw,\nthere still has locally cached connection cache entries when the remote sessions\nterminated by accident. AFAIK, you have provided a patch to solve this\nproblem, and it is in current CF [1].\n\n[1] - https://commitfest.postgresql.org/29/2651/\n\nBest Regards,\nJapin Li.\n\n\n\n\n\n\n\n\n\nOn Aug 14, 2020, at 2:15 PM, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n\nI\n think, since the idle_session_timeout is by default disabled, we\nhave\n no problem. My thought is what if a user enables the\nfeature(knowingly\n or unknowingly) on the remote backend? If the user\nknows\n about the above scenario, that may be fine. On the other hand,\neither\n we can always the feature on the remote backend(at the\nbeginning\n of the remote txn, like we set for some other configuration\nsettings\n see - configure_remote_session() in connection.c) or how\nabout\n mentioning the above scenario in this feature documentation?\n\n\n\nThough we can disable the idle_session_timeout when using postgres_fdw,\nthere still has locally cached connection cache entries when the remote sessions\nterminated by accident. 
AFAIK, you have provided a patch to solve this\nproblem, and it is in current CF [1].\n\n\n[1] - https://commitfest.postgresql.org/29/2651/\n\n\nBest Regards,\nJapin Li.",
"msg_date": "Fri, 14 Aug 2020 08:02:27 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "On Fri, Aug 14, 2020 at 1:32 PM Li Japin <japinli@hotmail.com> wrote:\n>\n> On Aug 14, 2020, at 2:15 PM, Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> I think, since the idle_session_timeout is by default disabled, we\n> have no problem. My thought is what if a user enables the\n> feature(knowingly or unknowingly) on the remote backend? If the user\n> knows about the above scenario, that may be fine. On the other hand,\n> either we can always the feature on the remote backend(at the\n> beginning of the remote txn, like we set for some other configuration\n> settings see - configure_remote_session() in connection.c) or how\n> about mentioning the above scenario in this feature documentation?\n>\n> Though we can disable the idle_session_timeout when using postgres_fdw,\n> there still has locally cached connection cache entries when the remote\nsessions\n> terminated by accident. AFAIK, you have provided a patch to solve this\n> problem, and it is in current CF [1].\n>\n> [1] - https://commitfest.postgresql.org/29/2651/\n>\n\nYes, that solution can retry the cached connections at only the beginning\nof the remote txn and not at the middle of the remote txn and that makes\nsense as we can not retry connecting to a different remote backend in the\nmiddle of a remote txn.\n\n+1 for disabling the idle_session_timeout feature in case of postgres_fdw.\nThis can avoid the remote backends to timeout during postgres_fdw usages.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Fri, Aug 14, 2020 at 1:32 PM Li Japin <japinli@hotmail.com> wrote:>> On Aug 14, 2020, at 2:15 PM, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:>> I think, since the idle_session_timeout is by default disabled, we> have no problem. My thought is what if a user enables the> feature(knowingly or unknowingly) on the remote backend? If the user> knows about the above scenario, that may be fine. 
On the other hand,> either we can always the feature on the remote backend(at the> beginning of the remote txn, like we set for some other configuration> settings see - configure_remote_session() in connection.c) or how> about mentioning the above scenario in this feature documentation?>> Though we can disable the idle_session_timeout when using postgres_fdw,> there still has locally cached connection cache entries when the remote sessions> terminated by accident. AFAIK, you have provided a patch to solve this> problem, and it is in current CF [1].>> [1] - https://commitfest.postgresql.org/29/2651/>Yes, that solution can retry the cached connections at only the beginning of the remote txn and not at the middle of the remote txn and that makes sense as we can not retry connecting to a different remote backend in the middle of a remote txn.+1 for disabling the idle_session_timeout feature in case of postgres_fdw. This can avoid the remote backends to timeout during postgres_fdw usages.With Regards,Bharath Rupireddy.EnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 17 Aug 2020 19:28:10 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "Hello.\n\nAt Mon, 17 Aug 2020 19:28:10 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Fri, Aug 14, 2020 at 1:32 PM Li Japin <japinli@hotmail.com> wrote:\n> >\n> > On Aug 14, 2020, at 2:15 PM, Bharath Rupireddy <\n> bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > I think, since the idle_session_timeout is by default disabled, we\n> > have no problem. My thought is what if a user enables the\n> > feature(knowingly or unknowingly) on the remote backend? If the user\n> > knows about the above scenario, that may be fine. On the other hand,\n> > either we can always the feature on the remote backend(at the\n> > beginning of the remote txn, like we set for some other configuration\n> > settings see - configure_remote_session() in connection.c) or how\n> > about mentioning the above scenario in this feature documentation?\n> >\n> > Though we can disable the idle_session_timeout when using postgres_fdw,\n> > there still has locally cached connection cache entries when the remote\n> sessions\n> > terminated by accident. AFAIK, you have provided a patch to solve this\n> > problem, and it is in current CF [1].\n> >\n> > [1] - https://commitfest.postgresql.org/29/2651/\n> >\n> \n> Yes, that solution can retry the cached connections at only the beginning\n> of the remote txn and not at the middle of the remote txn and that makes\n> sense as we can not retry connecting to a different remote backend in the\n> middle of a remote txn.\n> \n> +1 for disabling the idle_session_timeout feature in case of postgres_fdw.\n> This can avoid the remote backends to timeout during postgres_fdw usages.\n\nThe same already happens for idle_in_transaction_session_timeout and\nwe can use \"ALTER ROLE/DATABASE SET\" to dislable or loosen them, it's\na bit cumbersome, though. I don't think we should (at least\nimplicitly) disable those timeouts ad-hockerly for postgres_fdw.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 18 Aug 2020 10:19:44 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "On Aug 18, 2020, at 9:19 AM, Kyotaro Horiguchi <horikyota.ntt@gmail.com<mailto:horikyota.ntt@gmail.com>> wrote:\n\nThe same already happens for idle_in_transaction_session_timeout and\nwe can use \"ALTER ROLE/DATABASE SET\" to dislable or loosen them, it's\na bit cumbersome, though. I don't think we should (at least\nimplicitly) disable those timeouts ad-hockerly for postgres_fdw.\n\n+1.",
"msg_date": "Tue, 18 Aug 2020 02:12:51 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "On Tue, Aug 18, 2020 at 2:13 PM Li Japin <japinli@hotmail.com> wrote:\n> On Aug 18, 2020, at 9:19 AM, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> The same already happens for idle_in_transaction_session_timeout and\n> we can use \"ALTER ROLE/DATABASE SET\" to dislable or loosen them, it's\n> a bit cumbersome, though. I don't think we should (at least\n> implicitly) disable those timeouts ad-hockerly for postgres_fdw.\n>\n> +1.\n\nThis seems like a reasonable feature to me.\n\nThe delivery of the error message explaining what happened is probably\nnot reliable, so to some clients and on some operating systems this\nwill be indistinguishable from a dropped network connection or other\nerror, but that's OK and we already have that problem with the\nexisting timeout-based disconnection feature.\n\nThe main problem I have with it is the high frequency setitimer()\ncalls. If you enable both statement_timeout and idle_session_timeout,\nthen we get up to huge number of system calls, like the following\nstrace -c output for a few seconds of one backend under pgbench -S\nworkload shows:\n\n% time seconds usecs/call calls errors syscall\n------ ----------- ----------- --------- --------- ----------------\n 39.45 0.118685 0 250523 setitimer\n 29.98 0.090200 0 125275 sendto\n 24.30 0.073107 0 126235 973 recvfrom\n 6.01 0.018068 0 20950 pread64\n 0.26 0.000779 0 973 epoll_wait\n------ ----------- ----------- --------- --------- ----------------\n100.00 0.300839 523956 973 total\n\nThere's a small but measurable performance drop from this, as also\ndiscussed in another thread about another kind of timeout[1]. Maybe\nwe should try to fix that with something like the attached?\n\n[1] https://www.postgresql.org/message-id/flat/77def86b27e41f0efcba411460e929ae%40postgrespro.ru",
"msg_date": "Mon, 31 Aug 2020 12:51:20 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "On Aug 31, 2020, at 8:51 AM, Thomas Munro <thomas.munro@gmail.com<mailto:thomas.munro@gmail.com>> wrote:\n\nThe main problem I have with it is the high frequency setitimer()\ncalls. If you enable both statement_timeout and idle_session_timeout,\nthen we get up to huge number of system calls, like the following\nstrace -c output for a few seconds of one backend under pgbench -S\nworkload shows:\n\n% time seconds usecs/call calls errors syscall\n------ ----------- ----------- --------- --------- ----------------\n39.45 0.118685 0 250523 setitimer\n29.98 0.090200 0 125275 sendto\n24.30 0.073107 0 126235 973 recvfrom\n 6.01 0.018068 0 20950 pread64\n 0.26 0.000779 0 973 epoll_wait\n------ ----------- ----------- --------- --------- ----------------\n100.00 0.300839 523956 973 total\n\nHi, Thomas,\n\nCould you give more details about the test instructions?",
"msg_date": "Mon, 31 Aug 2020 02:40:37 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "On Mon, Aug 31, 2020 at 2:40 PM Li Japin <japinli@hotmail.com> wrote:\n> Could you give the more details about the test instructions?\n\nHi Japin,\n\nSure. Because I wasn't trying to get reliable TPS number or anything,\nI just used a simple short read-only test with one connection, like\nthis:\n\npgbench -i -s10 postgres\npgbench -T60 -Mprepared -S postgres\n\nThen I looked for the active backend and ran strace -c -p XXX for a\nfew seconds and hit ^C to get the counters. I doubt the times are\nvery accurate, but the number of calls is informative.\n\nIf you do that on a server running with -c statement_timeout=10s, you\nsee one setitimer() per transaction. If you also use -c\nidle_session_timeout=10s at the same time, you see two.\n\n\n",
"msg_date": "Mon, 31 Aug 2020 15:43:20 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "At Mon, 31 Aug 2020 12:51:20 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in \n> On Tue, Aug 18, 2020 at 2:13 PM Li Japin <japinli@hotmail.com> wrote:\n> > On Aug 18, 2020, at 9:19 AM, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > The same already happens for idle_in_transaction_session_timeout and\n> > we can use \"ALTER ROLE/DATABASE SET\" to dislable or loosen them, it's\n> > a bit cumbersome, though. I don't think we should (at least\n> > implicitly) disable those timeouts ad-hockerly for postgres_fdw.\n> >\n> > +1.\n> \n> This seems like a reasonable feature to me.\n> \n> The delivery of the error message explaining what happened is probably\n> not reliable, so to some clients and on some operating systems this\n> will be indistinguishable from a dropped network connection or other\n> error, but that's OK and we already have that problem with the\n> existing timeout-based disconnection feature.\n> \n> The main problem I have with it is the high frequency setitimer()\n> calls. If you enable both statement_timeout and idle_session_timeout,\n> then we get up to huge number of system calls, like the following\n> strace -c output for a few seconds of one backend under pgbench -S\n> workload shows:\n> \n> % time seconds usecs/call calls errors syscall\n> ------ ----------- ----------- --------- --------- ----------------\n> 39.45 0.118685 0 250523 setitimer\n> 29.98 0.090200 0 125275 sendto\n> 24.30 0.073107 0 126235 973 recvfrom\n> 6.01 0.018068 0 20950 pread64\n> 0.26 0.000779 0 973 epoll_wait\n> ------ ----------- ----------- --------- --------- ----------------\n> 100.00 0.300839 523956 973 total\n> \n> There's a small but measurable performance drop from this, as also\n> discussed in another thread about another kind of timeout[1]. Maybe\n> we should try to fix that with something like the attached?\n> \n> [1] https://www.postgresql.org/message-id/flat/77def86b27e41f0efcba411460e929ae%40postgrespro.ru\n\nI think it's worth doing. Maybe we can get rid of doing anything other\nthan removing an entry in the case where we disable a non-nearest\ntimeout.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 31 Aug 2020 13:49:05 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "> On Aug 31, 2020, at 11:43 AM, Thomas Munro <thomas.munro@gmail.com> wrote:\r\n> \r\n> On Mon, Aug 31, 2020 at 2:40 PM Li Japin <japinli@hotmail.com> wrote:\r\n>> Could you give the more details about the test instructions?\r\n> \r\n> Hi Japin,\r\n> \r\n> Sure. Because I wasn't trying to get reliable TPS number or anything,\r\n> I just used a simple short read-only test with one connection, like\r\n> this:\r\n> \r\n> pgbench -i -s10 postgres\r\n> pgbench -T60 -Mprepared -S postgres\r\n> \r\n> Then I looked for the active backend and ran strace -c -p XXX for a\r\n> few seconds and hit ^C to get the counters. I doubt the times are\r\n> very accurate, but the number of calls is informative.\r\n> \r\n> If you do that on a server running with -c statement_timeout=10s, you\r\n> see one setitimer() per transaction. If you also use -c\r\n> idle_session_timeout=10s at the same time, you see two.\r\n\r\nHi, Thomas,\r\n\r\nThanks for pointing out this problem; here is the comparison.\r\n\r\nWithout optimized setitimer usage:\r\n% time seconds usecs/call calls errors syscall\r\n------ ----------- ----------- --------- --------- ----------------\r\n 41.22 1.444851 1 1317033 setitimer\r\n 28.41 0.995936 2 658622 sendto\r\n 24.63 0.863316 1 659116 599 recvfrom\r\n 5.71 0.200275 2 111055 pread64\r\n 0.03 0.001152 2 599 epoll_wait\r\n 0.00 0.000000 0 1 epoll_ctl\r\n------ ----------- ----------- --------- --------- ----------------\r\n100.00 3.505530 2746426 599 total\r\n\r\nWith optimized setitimer usage:\r\n% time seconds usecs/call calls errors syscall\r\n------ ----------- ----------- --------- --------- ----------------\r\n 49.89 1.464332 1 1091429 sendto\r\n 40.83 1.198389 1 1091539 219 recvfrom\r\n 9.26 0.271890 1 183321 pread64\r\n 0.02 0.000482 2 214 epoll_wait\r\n 0.00 0.000013 3 5 setitimer\r\n 0.00 0.000010 2 5 rt_sigreturn\r\n 0.00 0.000000 0 1 epoll_ctl\r\n------ ----------- ----------- --------- --------- ----------------\r\n100.00 2.935116 2366514 219 total\r\n\r\nHere’s a modified version of Thomas’s patch.",
"msg_date": "Mon, 31 Aug 2020 05:33:02 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "Dear Li, \r\n\r\nI read your patch, and I think the documentation is too brief to avoid all problems.\r\n(I think if some connection pooling is used, the same problem will occur.)\r\nCould you add some explanations in the doc file? I made an example:\r\n\r\n```\r\nNote that this value should be set to zero if you use postgres_fdw or some\r\nconnection-pooling software, because connections might be closed unexpectedly. \r\n```\r\n\r\nI will send other comments if I find something.\r\n\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 13 Nov 2020 10:27:47 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Terminate the idle sessions"
},
{
"msg_contents": "On Nov 13, 2020, at 6:27 PM, kuroda.hayato@fujitsu.com<mailto:kuroda.hayato@fujitsu.com> wrote:\n\n\nI read your patch, and I think the documentation is too simple to avoid all problems.\n(I think if some connection pooling is used, the same problem will occur.)\nCould you add some explanations in the doc file? I made an example:\n\n```\nNote that this values should be set to zero if you use postgres_fdw or some\nConnection-pooling software, because connections might be closed unexpectedly.\n```\n\nThanks for your advice! Attached v4.\n\n--\nBest regards\nJapin Li",
"msg_date": "Sun, 15 Nov 2020 10:00:01 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "Dear Li,\r\n\r\n> Thanks for your advice! Attached v4.\r\n\r\nI confirmed it. OK.\r\n\r\n> @@ -30,6 +30,7 @@ typedef enum TimeoutId\r\n> \tSTANDBY_DEADLOCK_TIMEOUT,\r\n> \tSTANDBY_TIMEOUT,\r\n> \tSTANDBY_LOCK_TIMEOUT,\r\n> +\tIDLE_SESSION_TIMEOUT,\r\n> \tIDLE_IN_TRANSACTION_SESSION_TIMEOUT,\r\n> \t/* First user-definable timeout reason */\r\n> \tUSER_TIMEOUT,\r\n\r\nI'm not familiar with timeout, but I can see that the priority of idle-session is set lower than transaction-timeout.\r\nCould you explain the reason? In my image this timeout locates at the lowest layer, so it might have the lowest \r\npriority.\r\n\r\nOther codes are still checked :-(.\r\n\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n",
"msg_date": "Mon, 16 Nov 2020 05:22:35 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Terminate the idle sessions"
},
{
"msg_contents": "Hi Kuroda,\n\nOn Nov 16, 2020, at 1:22 PM, kuroda.hayato@fujitsu.com<mailto:kuroda.hayato@fujitsu.com> wrote:\n\n\n@@ -30,6 +30,7 @@ typedef enum TimeoutId\nSTANDBY_DEADLOCK_TIMEOUT,\nSTANDBY_TIMEOUT,\nSTANDBY_LOCK_TIMEOUT,\n+ IDLE_SESSION_TIMEOUT,\nIDLE_IN_TRANSACTION_SESSION_TIMEOUT,\n/* First user-definable timeout reason */\nUSER_TIMEOUT,\n\nI'm not familiar with timeout, but I can see that the priority of idle-session is set lower than transaction-timeout.\nCould you explain the reason? In my image this timeout locates at the lowest layer, so it might have the lowest\npriority.\n\nMy apologies! I just added an enum for the idle-session timeout and ignored the comment that says the enum values are in priority order.\nFixed as follows:\n\n@@ -30,8 +30,8 @@ typedef enum TimeoutId\n STANDBY_DEADLOCK_TIMEOUT,\n STANDBY_TIMEOUT,\n STANDBY_LOCK_TIMEOUT,\n- IDLE_SESSION_TIMEOUT,\n IDLE_IN_TRANSACTION_SESSION_TIMEOUT,\n+ IDLE_SESSION_TIMEOUT,\n /* First user-definable timeout reason */\n USER_TIMEOUT,\n /* Maximum number of timeout reasons */\n\nThanks for your review! Attached.\n\n--\nBest regards\nJapin Li",
"msg_date": "Mon, 16 Nov 2020 12:40:57 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "On Mon, Nov 16, 2020 at 5:41 AM Li Japin <japinli@hotmail.com> wrote:\n\n> Thanks for your review! Attached.\n>\n\nReading the doc changes:\n\nI'd rather not name postgres_fdw explicitly, or at least not solely, as a\nreason for setting this to zero. Additionally, using postgres_fdw within\nthe server doesn't cause issues, it's using postgres_fdw and the remote\nserver having this setting set to zero that causes a problem.\n\n<note>\nConsider setting this for specific users instead of as a server default.\nClient connections managed by connection poolers, or initiated indirectly\nlike those by a remote postgres_fdw using server, should probably be\nexcluded from this timeout.\n\nText within <para> should be indented one space (you missed both under\nlistitem).\n\nI'd suggest a comment that aside from a bit of resource consumption idle\nsessions do not interfere with the long-running stability of the server,\nunlike idle-in-transaction sessions which are controlled by the other\nconfiguration setting.\n\nDavid J.",
"msg_date": "Mon, 16 Nov 2020 16:59:15 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "--\nBest regards\nJapin Li\n\nOn Nov 17, 2020, at 7:59 AM, David G. Johnston <david.g.johnston@gmail.com<mailto:david.g.johnston@gmail.com>> wrote:\n\nOn Mon, Nov 16, 2020 at 5:41 AM Li Japin <japinli@hotmail.com<mailto:japinli@hotmail.com>> wrote:\nThanks for your review! Attached.\n\nReading the doc changes:\n\nI'd rather not name postgres_fdw explicitly, or at least not solely, as a reason for setting this to zero. Additionally, using postgres_fdw within the server doesn't cause issues, its using postgres_fdw and the remote server having this setting set to zero that causes a problem.\n\n<note>\nConsider setting this for specific users instead of as a server default. Client connections managed by connection poolers, or initiated indirectly like those by a remote postgres_fdw using server, should probably be excluded from this timeout.\n\nText within <para> should be indented one space (you missed both under listitem).\n\nThanks for your suggestion! How about changing the document as follows:\n\ndiff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\nindex 6c4e2a1fdc..23e691a7c5 100644\n--- a/doc/src/sgml/config.sgml\n+++ b/doc/src/sgml/config.sgml\n@@ -8281,17 +8281,17 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;\n </term>\n <listitem>\n <para>\n- Terminate any session that has been idle for longer than the specified amount of time.\n+ Terminate any session that has been idle for longer than the specified amount of time.\n </para>\n <para>\n- If this value is specified without units, it is taken as milliseconds.\n- A value of zero (the default) disables the timeout.\n+ If this value is specified without units, it is taken as milliseconds.\n+ A value of zero (the default) disables the timeout.\n </para>\n\n <note>\n <para>\n- This parameter should be set to zero if you use postgres_fdw or some\n- connection-pooling software, because connections might be closed unexpectedly.\n+ This parameter should be set to zero if you use some connection-pooling software, or\n+ PostgreSQL servers used by postgres_fdw, because connections might be closed unexpectedly.\n </para>\n </note>\n\nI'd suggest a comment that aside from a bit of resource consumption idle sessions do not interfere with the long-running stability of the server, unlike idle-in-transaction sessions which are controlled by the other configuration setting.\n\nCould you please explain how idle-in-transaction sessions interfere with the long-running stability?\n\n--\nBest regards\nJapin Li",
"msg_date": "Tue, 17 Nov 2020 02:45:12 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "On Monday, November 16, 2020, Li Japin <japinli@hotmail.com> wrote:\n\n>\n> <note>\n> Consider setting this for specific users instead of as a server default.\n> Client connections managed by connection poolers, or initiated indirectly\n> like those by a remote postgres_fdw using server, should probably be\n> excluded from this timeout.\n>\n> <note>\n>\n> <para>\n> - This parameter should be set to zero if you use postgres_fdw or\n> some\n> - connection-pooling software, because connections might be closed\n> unexpectedly.\n> + This parameter should be set to zero if you use some\n> connection-pooling software, or\n> + PostgreSQL servers used by postgres_fdw, because connections\n> might be closed unexpectedly.\n> </para>\n> </note>\n>\n>\nPrefer mine, “or pg servers used by postgres_fdw”, doesn’t flow.\n\n\n> Could you please explain how the idle-in-transaction interfere the\n> long-running stability?\n>\n\n From the docs (next section):\n\nThis allows any locks held by that session to be released and the\nconnection slot to be reused; it also allows tuples visible only to this\ntransaction to be vacuumed. See Section 24.1\n<https://www.postgresql.org/docs/13/routine-vacuuming.html> for more\ndetails about this.\n\nDavid J.",
"msg_date": "Mon, 16 Nov 2020 19:53:59 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "On Nov 17, 2020, at 10:53 AM, David G. Johnston <david.g.johnston@gmail.com<mailto:david.g.johnston@gmail.com>> wrote:\r\n\r\nOn Monday, November 16, 2020, Li Japin <japinli@hotmail.com<mailto:japinli@hotmail.com>> wrote:\r\n\r\n<note>\r\nConsider setting this for specific users instead of as a server default. Client connections managed by connection poolers, or initiated indirectly like those by a remote postgres_fdw using server, should probably be excluded from this timeout.\r\n\r\n <note>\r\n <para>\r\n- This parameter should be set to zero if you use postgres_fdw or some\r\n- connection-pooling software, because connections might be closed unexpectedly.\r\n+ This parameter should be set to zero if you use some connection-pooling software, or\r\n+ PostgreSQL servers used by postgres_fdw, because connections might be closed unexpectedly.\r\n </para>\r\n </note>\r\n\r\n\r\nPrefer mine, “or pg servers used by postgres_fdw”, doesn’t flow.\r\n\r\nCould you please explain how the idle-in-transaction interfere the long-running stability?\r\n\r\nFrom the docs (next section):\r\n\r\nThis allows any locks held by that session to be released and the connection slot to be reused; it also allows tuples visible only to this transaction to be vacuumed. See Section 24.1<https://www.postgresql.org/docs/13/routine-vacuuming.html> for more details about this.\r\n\r\nThanks David! Attached.\r\n\r\n--\r\nBest regards\r\nJapin Li",
"msg_date": "Tue, 17 Nov 2020 03:27:05 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "Dear Li, David,\r\n\r\n> Additionally, using postgres_fdw within the server doesn't cause issues,\r\n> its using postgres_fdw and the remote server having this setting set to zero that causes a problem.\r\n\r\nI didn't know the fact that postgres_fdw can be used within the server... Thanks.\r\n\r\nI read the optimize-setitimer patch, and it looks basically good. I put down what I understand,\r\nso please confirm whether it matches your implementation.\r\n(Maybe I missed some subtleties, so please review, anyone...)\r\n\r\n[basic concept]\r\n\r\nsigalrm_due_at means the time that the interval timer will ring, and sigalrm_delivered means who calls schedule_alarm().\r\nIf fin_time of active_timeouts[0] is larger than or equal to sigalrm_due_at,\r\nstop calling setitimer because handle_sig_alarm() will be called sooner.\r\n\r\n[when to call setitimer]\r\n\r\nIn the attached patch, setitimer() will only be called in the following scenarios:\r\n\r\n* when handle_sig_alarm() is called due to the pqsignal\r\n* when a timeout is registered and its fin_time is later than active_timeouts[0]\r\n* when disabling a timeout\r\n* when handle_sig_alarm() is interrupted and rescheduled(?)\r\n\r\nAccording to comments, handle_sig_alarm() may be interrupted because of the ereport.\r\nI think if handle_sig_alarm() is interrupted before substituting sigalrm_due_at,\r\nthe interval timer will never be set. Is that correct, or is my assumption wrong?\r\n\r\nLastly, I found that setitimer is obsolete and should be changed to another one. According to my man page:\r\n\r\n```\r\nPOSIX.1-2001, SVr4, 4.4BSD (this call first appeared in 4.2BSD).\r\nPOSIX.1-2008 marks getitimer() and setitimer() obsolete,\r\nrecommending the use of the POSIX timers API (timer_gettime(2), timer_settime(2), etc.) instead.\r\n```\r\n\r\nDo you have an opinion for this? I think it should be changed\r\nif all platforms can support the timer_settime system call, but this fix affects all timeouts,\r\nso more considerations might be needed.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Tue, 17 Nov 2020 06:07:35 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Terminate the idle sessions"
},
{
"msg_contents": "On Nov 17, 2020, at 2:07 PM, kuroda.hayato@fujitsu.com<mailto:kuroda.hayato@fujitsu.com> wrote:\r\n\r\nDear Li, David,\r\n\r\n> Additionally, using postgres_fdw within the server doesn't cause issues,\r\n> its using postgres_fdw and the remote server having this setting set to zero that causes a problem.\r\n\r\nI didn't know the fact that postgres_fdw can use within the server... Thanks.\r\n\r\nI read optimize-setitimer patch, and looks basically good. I put what I understanding,\r\nso please confirm it whether your implementation is correct.\r\n(Maybe I missed some simultaneities, so please review anyone...)\r\n\r\n[besic consept]\r\n\r\nsigalrm_due_at means the time that interval timer will ring, and sigalrm_delivered means who calls schedule_alarm().\r\nIf fin_time of active_timeouts[0] is larger than or equal to sigalrm_due_at,\r\nstop calling setitimer because handle_sig_alarm() will be call sooner.\r\n\r\n\r\nAgree. The sigalrm_delivered means a timer has been handled by handle_sig_alarm(), so we should call setitimer() in next schedule_alarm().\r\n\r\n[when call setitimer]\r\n\r\nIn the attached patch, setitimer() will be only called the following scenarios:\r\n\r\n* when handle_sig_alarm() is called due to the pqsignal\r\n* when a timeout is registered and its fin_time is later than active_timeous[0]\r\n* when disable a timeout\r\n* when handle_sig_alarm() is interrupted and rescheduled(?)\r\n\r\nAccording to comments, handle_sig_alarm() may be interrupted because of the ereport.\r\nI think if handle_sig_alarm() is interrupted before subsutituting sigalrm_due_at to true,\r\ninterval timer will be never set. Is it correct, or is my assumption wrong?\r\n\r\n\r\nI’m not familiar with the system interrupt, however, the sigalrm_due_at is subsutitue between HOLD_INTERRUPTS()\r\nand RESUM_INTERRUPTS(), so I think it cannot be interrupted. The following comments comes from miscadmin.h.\r\n\r\n> The HOLD_INTERRUPTS() and RESUME_INTERRUPTS() macros\r\n> allow code to ensure that no cancel or die interrupt will be accepted,\r\n> even if CHECK_FOR_INTERRUPTS() gets called in a subroutine. The interrupt\r\n> will be held off until CHECK_FOR_INTERRUPTS() is done outside any\r\n> HOLD_INTERRUPTS() ... RESUME_INTERRUPTS() section.\r\n\r\n\r\nLastly, I found that setitimer is obsolete and should change to another one. According to my man page:\r\n\r\n```\r\nPOSIX.1-2001, SVr4, 4.4BSD (this call first appeared in 4.2BSD).\r\nPOSIX.1-2008 marks getitimer() and setitimer() obsolete,\r\nrecommending the use of the POSIX timers API (timer_gettime(2), timer_settime(2), etc.) instead.\r\n```\r\n\r\nDo you have an opinion for this? I think it should be changed\r\nif all platform can support timer_settime system call, but this fix affects all timeouts,\r\nso more considerations might be needed.\r\n\r\n\r\nNot sure! I find that Win32 do not support setitimer(), PostgreSQL emulate setitimer() by creating a persistent thread to handle\r\nthe timer setting and notification upon timeout.\r\n\r\nSo if we want to replace it, I think we should open a new thread to achieve this.\r\n\r\n--\r\nBest regards\r\nJapin Li\r\n\r\n",
"msg_date": "Tue, 17 Nov 2020 09:01:58 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "Dear Li,\r\n\r\n> I’m not familiar with the system interrupt, however,\r\n> the sigalrm_due_at is subsutitue between HOLD_INTERRUPTS()\r\n> and RESUM_INTERRUPTS(), so I think it cannot be interrupted.\r\n> The following comments comes from miscadmin.h.\r\n\r\nRight, but how about before HOLD_INTERRUPTS()?\r\nIf so, only calling handle_sig_alarm() is occurred, and\r\nSetitimer will not be set, I think.\r\n\r\n> Not sure! I find that Win32 do not support setitimer(),\r\n> PostgreSQL emulate setitimer() by creating a persistent thread to handle\r\n> the timer setting and notification upon timeout.\r\n\r\nYes, set/getitimer() is the systemcall, and\r\nimplemented only in the unix-like system.\r\nBut I rethink that such a fix is categorized in the refactoring and\r\nwe should separate topic. Hence we can ignore here.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\nFrom: Li Japin <japinli@hotmail.com> \r\nSent: Tuesday, November 17, 2020 6:02 PM\r\nTo: Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com>\r\nCc: David G. Johnston <david.g.johnston@gmail.com>; Kyotaro Horiguchi <horikyota.ntt@gmail.com>; Thomas Munro <thomas.munro@gmail.com>; bharath.rupireddyforpostgres@gmail.com; pgsql-hackers@lists.postgresql.org\r\nSubject: Re: Terminate the idle sessions\r\n\r\n\r\nOn Nov 17, 2020, at 2:07 PM, mailto:kuroda.hayato@fujitsu.com wrote:\r\n\r\nDear Li, David,\r\n \r\n> Additionally, using postgres_fdw within the server doesn't cause issues,\r\n> its using postgres_fdw and the remote server having this setting set to zero that causes a problem.\r\n \r\nI didn't know the fact that postgres_fdw can use within the server... Thanks.\r\n \r\nI read optimize-setitimer patch, and looks basically good. I put what I understanding,\r\nso please confirm it whether your implementation is correct.\r\n(Maybe I missed some simultaneities, so please review anyone...)\r\n \r\n[besic consept]\r\n \r\nsigalrm_due_at means the time that interval timer will ring, and sigalrm_delivered means who calls schedule_alarm().\r\nIf fin_time of active_timeouts[0] is larger than or equal to sigalrm_due_at,\r\nstop calling setitimer because handle_sig_alarm() will be call sooner.\r\n \r\n\r\nAgree. The sigalrm_delivered means a timer has been handled by handle_sig_alarm(), so we should call setitimer() in next schedule_alarm().\r\n\r\n\r\n[when call setitimer]\r\n \r\nIn the attached patch, setitimer() will be only called the following scenarios:\r\n \r\n* when handle_sig_alarm() is called due to the pqsignal\r\n* when a timeout is registered and its fin_time is later than active_timeous[0]\r\n* when disable a timeout\r\n* when handle_sig_alarm() is interrupted and rescheduled(?)\r\n \r\nAccording to comments, handle_sig_alarm() may be interrupted because of the ereport.\r\nI think if handle_sig_alarm() is interrupted before subsutituting sigalrm_due_at to true,\r\ninterval timer will be never set. Is it correct, or is my assumption wrong?\r\n \r\n\r\nI’m not familiar with the system interrupt, however, the sigalrm_due_at is subsutitue between HOLD_INTERRUPTS()\r\nand RESUM_INTERRUPTS(), so I think it cannot be interrupted. The following comments comes from miscadmin.h.\r\n\r\n> The HOLD_INTERRUPTS() and RESUME_INTERRUPTS() macros\r\n> allow code to ensure that no cancel or die interrupt will be accepted,\r\n> even if CHECK_FOR_INTERRUPTS() gets called in a subroutine. The interrupt\r\n> will be held off until CHECK_FOR_INTERRUPTS() is done outside any\r\n> HOLD_INTERRUPTS() ... RESUME_INTERRUPTS() section.\r\n\r\n\r\n\r\nLastly, I found that setitimer is obsolete and should change to another one. According to my man page:\r\n \r\n```\r\nPOSIX.1-2001, SVr4, 4.4BSD (this call first appeared in 4.2BSD).\r\nPOSIX.1-2008 marks getitimer() and setitimer() obsolete,\r\nrecommending the use of the POSIX timers API (timer_gettime(2), timer_settime(2), etc.) instead.\r\n```\r\n \r\nDo you have an opinion for this? I think it should be changed\r\nif all platform can support timer_settime system call, but this fix affects all timeouts,\r\nso more considerations might be needed.\r\n \r\n\r\nNot sure! I find that Win32 do not support setitimer(), PostgreSQL emulate setitimer() by creating a persistent thread to handle\r\nthe timer setting and notification upon timeout.\r\n\r\nSo if we want to replace it, I think we should open a new thread to achieve this.\r\n\r\n--\r\nBest regards\r\nJapin Li\r\n\r\n",
"msg_date": "Wed, 18 Nov 2020 02:40:07 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Terminate the idle sessions"
},
{
"msg_contents": "On Nov 18, 2020, at 10:40 AM, kuroda.hayato@fujitsu.com<mailto:kuroda.hayato@fujitsu.com> wrote:\r\n\r\n\r\nI’m not familiar with the system interrupt, however,\r\nthe sigalrm_due_at is subsutitue between HOLD_INTERRUPTS()\r\nand RESUM_INTERRUPTS(), so I think it cannot be interrupted.\r\nThe following comments comes from miscadmin.h.\r\n\r\nRight, but how about before HOLD_INTERRUPTS()?\r\nIf so, only calling handle_sig_alarm() is occurred, and\r\nSetitimer will not be set, I think.\r\n\r\nYeah, it might be occurred. Any suggestions to fix it?\r\n\r\n--\r\nBest regards\r\nJapin Li\r\n",
"msg_date": "Wed, 18 Nov 2020 06:01:28 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "Dear Li, \r\n\r\n> Yeah, it might be occurred. Any suggestions to fix it?\r\n\r\nOops.. I forgot putting my suggestion. Sorry.\r\nHow about substituting sigalrm_delivered to true in the reschedule_timeouts()?\r\nMaybe this processing looks strange, so some comments should be put too.\r\nHere is an example:\r\n\r\n```diff\r\n@@ -423,7 +423,14 @@ reschedule_timeouts(void)\r\n \r\n /* Reschedule the interrupt, if any timeouts remain active. */\r\n if (num_active_timeouts > 0)\r\n+ {\r\n+ /*\r\n+ * sigalrm_delivered is set to true,\r\n+ * because any intrreputions might be occured.\r\n+ */\r\n+ sigalrm_delivered = true;\r\n schedule_alarm(GetCurrentTimestamp());\r\n+ }\r\n }\r\n```\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 18 Nov 2020 06:22:50 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Terminate the idle sessions"
},
{
"msg_contents": "On Nov 18, 2020, at 2:22 PM, kuroda.hayato@fujitsu.com<mailto:kuroda.hayato@fujitsu.com> wrote:\n\nOops.. I forgot putting my suggestion. Sorry.\nHow about substituting sigalrm_delivered to true in the reschedule_timeouts()?\nMaybe this processing looks strange, so some comments should be put too.\nHere is an example:\n\n```diff\n@@ -423,7 +423,14 @@ reschedule_timeouts(void)\n\n /* Reschedule the interrupt, if any timeouts remain active. */\n if (num_active_timeouts > 0)\n+ {\n+ /*\n+ * sigalrm_delivered is set to true,\n+ * because any intrreputions might be occured.\n+ */\n+ sigalrm_delivered = true;\n schedule_alarm(GetCurrentTimestamp());\n+ }\n}\n```\n\nThanks for your suggestion. Attached!\n\n--\nBest regards\nJapin Li",
"msg_date": "Wed, 18 Nov 2020 06:57:13 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "Dear Li,\r\n\r\n> Thanks for your suggestion. Attached!\r\n\r\nI prefer your comments:-).\r\n\r\nI think this patch is mostly good.\r\nI looked whole the codes again and I found the following comment in the PostgresMain():\r\n\r\n```c\r\n\t\t/*\r\n\t\t * (5) turn off the idle-in-transaction timeout\r\n\t\t */\r\n```\r\n\r\nPlease mention about idle-session timeout and check another comment.\r\n\r\nBest Regards, \r\nHayato Kuroda\r\n",
"msg_date": "Thu, 19 Nov 2020 08:32:46 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Terminate the idle sessions"
},
{
"msg_contents": "hi, Kuroda\n\nOn 11/19/20 4:32 PM, kuroda.hayato@fujitsu.com wrote:\n> Dear Li,\n>\n>> Thanks for your suggestion. Attached!\n> I prefer your comments:-).\n>\n> I think this patch is mostly good.\n> I looked whole the codes again and I found the following comment in the PostgresMain():\n>\n> ```c\n> \t\t/*\n> \t\t * (5) turn off the idle-in-transaction timeout\n> \t\t */\n> ```\n>\n> Please mention about idle-session timeout and check another comment.\nThanks! Add the comment for idle-session timeout.\n\n--\nBest regards\nJapin Li",
"msg_date": "Thu, 19 Nov 2020 17:35:05 +0800",
"msg_from": "japin <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "Dear Li,\r\n\r\n> Thanks! Add the comment for idle-session timeout.\r\n\r\nI confirmed it. OK.\r\n\r\n\r\nI don't have any comments anymore. If no one has,\r\nI will change the status few days later.\r\nOther comments or suggestions to him?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 20 Nov 2020 02:05:09 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Terminate the idle sessions"
},
{
"msg_contents": "No one have any comments, patch tester says OK, and I think this works well.\r\nI changed status to \"Ready for Committer.\"\r\n\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n-----Original Message-----\r\nFrom: kuroda.hayato@fujitsu.com <kuroda.hayato@fujitsu.com> \r\nSent: Friday, November 20, 2020 11:05 AM\r\nTo: 'japin' <japinli@hotmail.com>\r\nCc: David G. Johnston <david.g.johnston@gmail.com>; Kyotaro Horiguchi <horikyota.ntt@gmail.com>; Thomas Munro <thomas.munro@gmail.com>; bharath.rupireddyforpostgres@gmail.com; pgsql-hackers@lists.postgresql.org\r\nSubject: RE: Terminate the idle sessions\r\n\r\nDear Li,\r\n\r\n> Thanks! Add the comment for idle-session timeout.\r\n\r\nI confirmed it. OK.\r\n\r\n\r\nI don't have any comments anymore. If no one has,\r\nI will change the status few days later.\r\nOther comments or suggestions to him?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Tue, 24 Nov 2020 00:01:49 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Terminate the idle sessions"
},
{
"msg_contents": "On Mon, Nov 23, 2020 at 5:02 PM kuroda.hayato@fujitsu.com <\nkuroda.hayato@fujitsu.com> wrote:\n\n> No one have any comments, patch tester says OK, and I think this works\n> well.\n> I changed status to \"Ready for Committer.\"\n>\nSome proof-reading:\n\nv8-0001\n\nDocumentation:\n\nMy suggestion wasn't taken for the first note paragraph (review/author\ndisagreement) and the current has the following issues:\n\n\"if you use some connection-pooling\" software doesn't need the word \"some\"\nDon't substitute \"pg\" for the name of the product, PostgreSQL.\nThe word \"used\" is a more stylistic dislike, but \"connected to using\npostgres_fdw\" would be a better choice IMO.\n\nCode (minor, but if you are in there anyway):\n\n(5) turn off ... timeout (there are now two, timeouts should be plural)\n\nv8-0002 - No suggestions\n\nDavid J.\n",
"msg_date": "Mon, 23 Nov 2020 20:39:48 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "Hi, Kuroda\n\nThank for your review.\n\n> On Nov 24, 2020, at 8:01 AM, kuroda.hayato@fujitsu.com wrote:\n> \n> No one have any comments, patch tester says OK, and I think this works well.\n> I changed status to \"Ready for Committer.\"\n> \n> Hayato Kuroda\n> FUJITSU LIMITED\n> \n> -----Original Message-----\n> From: kuroda.hayato@fujitsu.com <kuroda.hayato@fujitsu.com> \n> Sent: Friday, November 20, 2020 11:05 AM\n> To: 'japin' <japinli@hotmail.com>\n> Cc: David G. Johnston <david.g.johnston@gmail.com>; Kyotaro Horiguchi <horikyota.ntt@gmail.com>; Thomas Munro <thomas.munro@gmail.com>; bharath.rupireddyforpostgres@gmail.com; pgsql-hackers@lists.postgresql.org\n> Subject: RE: Terminate the idle sessions\n> \n> Dear Li,\n> \n>> Thanks! Add the comment for idle-session timeout.\n> \n> I confirmed it. OK.\n> \n> \n> I don't have any comments anymore. If no one has,\n> I will change the status few days later.\n> Other comments or suggestions to him?\n> \n> Best Regards,\n> Hayato Kuroda\n> FUJITSU LIMITED\n> \n\n--\nBest regards\nJapin Li\n\n",
"msg_date": "Tue, 24 Nov 2020 06:22:09 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "Hi, David\r\n\r\nThanks for your suggestion!\r\n\r\nOn Nov 24, 2020, at 11:39 AM, David G. Johnston <david.g.johnston@gmail.com<mailto:david.g.johnston@gmail.com>> wrote:\r\n\r\nOn Mon, Nov 23, 2020 at 5:02 PM kuroda.hayato@fujitsu.com<mailto:kuroda.hayato@fujitsu.com> <kuroda.hayato@fujitsu.com<mailto:kuroda.hayato@fujitsu.com>> wrote:\r\nNo one have any comments, patch tester says OK, and I think this works well.\r\nI changed status to \"Ready for Committer.\"\r\nSome proof-reading:\r\n\r\nv8-0001\r\n\r\nDocumentation:\r\n\r\nMy suggestion wasn't taken for the first note paragraph (review/author disagreement) and the current has the following issues:\r\n\r\nSorry for ignoring this suggestion.\r\n\r\n\"if you use some connection-pooling\" software doesn't need the word \"some\"\r\nDon't substitute \"pg\" for the name of the product, PostgreSQL.\r\nThe word \"used\" is a more stylistic dislike, but \"connected to using postgres_fdw\" would be a better choice IMO.\r\n\r\nCode (minor, but if you are in there anyway):\r\n\r\n\r\nHow about use “foreign-data wrapper” replace “postgres_fdw”?\r\n\r\ndiff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\r\nindex b71a116be3..a3a50e7bdb 100644\r\n--- a/doc/src/sgml/config.sgml\r\n+++ b/doc/src/sgml/config.sgml\r\n@@ -8293,8 +8293,9 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;\r\n\r\n <note>\r\n <para>\r\n- This parameter should be set to zero if you use some connection-pooling software,\r\n- or pg servers used by postgres_fdw, because connections might be closed unexpectedly.\r\n+ This parameter should be set to zero if you use connection-pooling software,\r\n+ or <productname>PostgreSQL</productname> servers connected to using foreign-data\r\n+ wrapper, because connections might be closed unexpectedly.\r\n </para>\r\n <para>\r\n Aside from a bit of resource consumption idle sessions do not interfere with the\r\n\r\n(5) turn off ... timeout (there are now two, timeouts should be plural)\r\n\r\nFixed.\r\n\r\ndiff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c\r\nindex ba2369b72d..bcf8c610fd 100644\r\n--- a/src/backend/tcop/postgres.c\r\n+++ b/src/backend/tcop/postgres.c\r\n@@ -4278,7 +4278,7 @@ PostgresMain(int argc, char *argv[],\r\n DoingCommandRead = false;\r\n\r\n /*\r\n- * (5) turn off the idle-in-transaction and idle-session timeout\r\n+ * (5) turn off the idle-in-transaction and idle-session timeouts\r\n */\r\n if (disable_idle_in_transaction_timeout)\r\n {\r\n\r\n\r\nI will send a new patch if there is not other comments.\r\n\r\n--\r\nBest Regards,\r\nJapin Li\r\n",
"msg_date": "Tue, 24 Nov 2020 06:22:23 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "On Mon, Nov 23, 2020 at 11:22 PM Li Japin <japinli@hotmail.com> wrote:\n\n>\n> How about use “foreign-data wrapper” replace “postgres_fdw”?\n>\n\nI don't see much value in avoiding mentioning that specific term - my\nproposal turned it into an example instead of being exclusive.\n\n\n> - This parameter should be set to zero if you use some\n> connection-pooling software,\n> - or pg servers used by postgres_fdw, because connections might be\n> closed unexpectedly.\n> + This parameter should be set to zero if you use\n> connection-pooling software,\n> + or <productname>PostgreSQL</productname> servers connected to\n> using foreign-data\n> + wrapper, because connections might be closed unexpectedly.\n> </para>\n>\n\nMaybe:\n\n+ or your PostgreSQL server receives connection from postgres_fdw or\nsimilar middleware.\n+ Such software is expected to self-manage its connections.\nDavid J.\n",
"msg_date": "Tue, 24 Nov 2020 08:20:15 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "On Nov 24, 2020, at 11:20 PM, David G. Johnston <david.g.johnston@gmail.com<mailto:david.g.johnston@gmail.com>> wrote:\r\n\r\nOn Mon, Nov 23, 2020 at 11:22 PM Li Japin <japinli@hotmail.com<mailto:japinli@hotmail.com>> wrote:\r\n\r\nHow about use “foreign-data wrapper” replace “postgres_fdw”?\r\n\r\nI don't see much value in avoiding mentioning that specific term - my proposal turned it into an example instead of being exclusive.\r\n\r\n\r\n- This parameter should be set to zero if you use some connection-pooling software,\r\n- or pg servers used by postgres_fdw, because connections might be closed unexpectedly.\r\n+ This parameter should be set to zero if you use connection-pooling software,\r\n+ or <productname>PostgreSQL</productname> servers connected to using foreign-data\r\n+ wrapper, because connections might be closed unexpectedly.\r\n </para>\r\n\r\nMaybe:\r\n\r\n+ or your PostgreSQL server receives connection from postgres_fdw or similar middleware.\r\n+ Such software is expected to self-manage its connections.\r\n\r\nThank you for your suggestion and patient! Fixed.\r\n\r\n```\r\n+ <para>\r\n+ This parameter should be set to zero if you use connection-pooling software,\r\n+ or <productname>PostgreSQL</productname> servers connected to using postgres_fdw\r\n+ or similar middleware (such software is expected to self-manage its connections),\r\n+ because connections might be closed unexpectedly.\r\n+ </para>\r\n```\r\n\r\n--\r\nBest regards\r\nJapin Li",
"msg_date": "Wed, 25 Nov 2020 02:18:28 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "On 25.11.2020 05:18, Li Japin wrote:\n>\n>\n>\n>> On Nov 24, 2020, at 11:20 PM, David G. Johnston \n>> <david.g.johnston@gmail.com <mailto:david.g.johnston@gmail.com>> wrote:\n>>\n>> On Mon, Nov 23, 2020 at 11:22 PM Li Japin <japinli@hotmail.com \n>> <mailto:japinli@hotmail.com>> wrote:\n>>\n>>\n>> How about use “foreign-data wrapper” replace “postgres_fdw”?\n>>\n>>\n>> I don't see much value in avoiding mentioning that specific term - my \n>> proposal turned it into an example instead of being exclusive.\n>>\n>>\n>> - This parameter should be set to zero if you use some\n>> connection-pooling software,\n>> - or pg servers used by postgres_fdw, because connections\n>> might be closed unexpectedly.\n>> + This parameter should be set to zero if you use\n>> connection-pooling software,\n>> + or <productname>PostgreSQL</productname> servers\n>> connected to using foreign-data\n>> + wrapper, because connections might be closed unexpectedly.\n>> </para>\n>>\n>>\n>> Maybe:\n>>\n>> + or your PostgreSQL server receives connection from postgres_fdw or \n>> similar middleware.\n>> + Such software is expected to self-manage its connections.\n>\n> Thank you for your suggestion and patient! Fixed.\n>\n> ```\n> + <para>\n> + This parameter should be set to zero if you use \n> connection-pooling software,\n> + or <productname>PostgreSQL</productname> servers connected \n> to using postgres_fdw\n> + or similar middleware (such software is expected to \n> self-manage its connections),\n> + because connections might be closed unexpectedly.\n> + </para>\n> ```\n>\n> --\n> Best regards\n> Japin Li\n>\n\nStatus update for a commitfest entry.\nAs far as I see, all recommendations from reviewers were addressed in \nthe last version of the patch.\n\nIt passes CFbot successfully, so I move it to Ready For Committer.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n",
"msg_date": "Tue, 1 Dec 2020 16:55:49 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "Li Japin <japinli@hotmail.com> writes:\n> [ v9-0001-Allow-terminating-the-idle-sessions.patch ]\n\nI've reviewed and pushed this. A few notes:\n\n* Thomas' patch for improving timeout.c seems like a great idea, but\nit did indeed have a race condition, and I felt the comments could do\nwith more work.\n\n* I'm not entirely comfortable with the name \"idle_session_timeout\",\nbecause it sounds like it applies to all idle states, but actually\nit only applies when we're not in a transaction. I left the name\nalone and tried to clarify the docs, but I wonder whether a different\nname would be better. (Alternatively, we could say that it does\napply in all cases, making the effective timeout when in a transaction\nthe smaller of the two GUCs. But that'd be more complex to implement\nand less flexible, so I'm not in favor of that answer.)\n\n* The SQLSTATE you chose for the new error condition seems pretty\nrandom. I do not see it in the SQL standard, so using a code that's\nwithin the spec-reserved code range is certainly wrong. I went with\n08P02 (the \"P\" makes it outside the reserved range), but I'm not\nreally happy either with using class 08 (\"Connection Exception\",\nwhich seems to be mainly meant for connection-request failures),\nor the fact that ERRCODE_IDLE_IN_TRANSACTION_SESSION_TIMEOUT is\npractically identical but it's not even in the same major class.\nNow 25 (\"Invalid Transaction State\") is certainly not right for\nthis new error, but I think what that's telling us is that 25 was a\nmisguided choice for the other error. In a green field I think I'd\nput both of them in class 53 (\"Insufficient Resources\") or maybe class\n57 (\"Operator Intervention\"). Can we get away with renumbering the\nolder error at this point? In any case I'd be inclined to move the\nnew error to 53 or 57 --- anybody have a preference which?\n\n* I think the original intent in timeout.h was to have 10 slots\navailable for run-time-defined timeout reasons. This is the third\npredefined one we've added since the header was created, so it's\nstarting to look a little tight. I adjusted the code to hopefully\npreserve 10 free slots going forward.\n\n* I noted the discussion about dropping setitimer in place of some\nnewer kernel API. I'm not sure that that is worth the trouble in\nisolation, but it might be worth doing if we can switch the whole\nmodule over to relying on CLOCK_MONOTONIC, so as to make its behavior\nless flaky if the sysadmin steps the system clock. Portability\nmight be a big issue here though, plus we'd have to figure out how\nwe want to define enable_timeout_at(), which is unlikely to want to\nuse CLOCK_MONOTONIC values. In any case, that's surely material\nfor a whole new thread.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 06 Jan 2021 18:55:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "On Thu, Jan 7, 2021 at 12:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> * Thomas' patch for improving timeout.c seems like a great idea, but\n> it did indeed have a race condition, and I felt the comments could do\n> with more work.\n\nOops, and thanks! Very happy to see this one in the tree.\n\n> * I'm not entirely comfortable with the name \"idle_session_timeout\",\n> because it sounds like it applies to all idle states, but actually\n> it only applies when we're not in a transaction. I left the name\n> alone and tried to clarify the docs, but I wonder whether a different\n> name would be better. (Alternatively, we could say that it does\n> apply in all cases, making the effective timeout when in a transaction\n> the smaller of the two GUCs. But that'd be more complex to implement\n> and less flexible, so I'm not in favor of that answer.)\n\nHmm, it is a bit confusing, but having them separate is indeed more flexible.\n\n> * The SQLSTATE you chose for the new error condition seems pretty\n> random. I do not see it in the SQL standard, so using a code that's\n> within the spec-reserved code range is certainly wrong. I went with\n> 08P02 (the \"P\" makes it outside the reserved range), but I'm not\n> really happy either with using class 08 (\"Connection Exception\",\n> which seems to be mainly meant for connection-request failures),\n> or the fact that ERRCODE_IDLE_IN_TRANSACTION_SESSION_TIMEOUT is\n> practically identical but it's not even in the same major class.\n> Now 25 (\"Invalid Transaction State\") is certainly not right for\n> this new error, but I think what that's telling us is that 25 was a\n> misguided choice for the other error. In a green field I think I'd\n> put both of them in class 53 (\"Insufficient Resources\") or maybe class\n> 57 (\"Operator Intervention\"). Can we get away with renumbering the\n> older error at this point? 
In any case I'd be inclined to move the\n> new error to 53 or 57 --- anybody have a preference which?\n\nI don't have a strong view here, but 08 with a P doesn't seem crazy to\nme. Unlike eg 57014 (statement_timeout), 57017 (deadlock_timeout),\n55P03 (lock_timeout... huh, I just noticed that DB2 uses 57017 for\nthat, distinguished from deadlock by another error field), after these\ntimeouts you don't have a session/connection anymore. The two are a\nbit different though: in the older one, you were in a transaction, and\nit seems to me quite newsworthy that your transaction has been\naborted, information that is not conveyed quite so clearly with a\nconnection-related error class.\n\n> * I noted the discussion about dropping setitimer in place of some\n> newer kernel API. I'm not sure that that is worth the trouble in\n> isolation, but it might be worth doing if we can switch the whole\n> module over to relying on CLOCK_MONOTONIC, so as to make its behavior\n> less flaky if the sysadmin steps the system clock. Portability\n> might be a big issue here though, plus we'd have to figure out how\n> we want to define enable_timeout_at(), which is unlikely to want to\n> use CLOCK_MONOTONIC values. In any case, that's surely material\n> for a whole new thread.\n\n+1\n\n\n",
"msg_date": "Thu, 7 Jan 2021 15:03:56 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "On Thu, Jan 7, 2021 at 3:03 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Jan 7, 2021 at 12:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > * The SQLSTATE you chose for the new error condition seems pretty\n> > random. I do not see it in the SQL standard, so using a code that's\n> > within the spec-reserved code range is certainly wrong. I went with\n> > 08P02 (the \"P\" makes it outside the reserved range), but I'm not\n> > really happy either with using class 08 (\"Connection Exception\",\n> > which seems to be mainly meant for connection-request failures),\n> > or the fact that ERRCODE_IDLE_IN_TRANSACTION_SESSION_TIMEOUT is\n> > practically identical but it's not even in the same major class.\n> > Now 25 (\"Invalid Transaction State\") is certainly not right for\n> > this new error, but I think what that's telling us is that 25 was a\n> > misguided choice for the other error. In a green field I think I'd\n> > put both of them in class 53 (\"Insufficient Resources\") or maybe class\n> > 57 (\"Operator Intervention\"). Can we get away with renumbering the\n> > older error at this point? In any case I'd be inclined to move the\n> > new error to 53 or 57 --- anybody have a preference which?\n>\n> I don't have a strong view here, but 08 with a P doesn't seem crazy to\n> me. Unlike eg 57014 (statement_timeout), 57017 (deadlock_timeout),\n> 55P03 (lock_timeout... huh, I just noticed that DB2 uses 57017 for\n> that, distinguished from deadlock by another error field), after these\n> timeouts you don't have a session/connection anymore. 
The two are a\n> bit different though: in the older one, you were in a transaction, and\n> it seems to me quite newsworthy that your transaction has been\n> aborted, information that is not conveyed quite so clearly with a\n> connection-related error class.\n\nHmm, on further reflection it's still more similar than different and\nI'd probably have voted for 08xxx for the older one too if I'd been\npaying attention.\n\nOne of the strange things about these errors is that they're\nasynchronous/unsolicited, but they appear to the client to be the\nresponse to their next request (if it doesn't eat ECONNRESET instead).\nThat makes me think we should try to make it clear that it's a sort of\nlower level thing, and not really a response to your next request at\nall. Perhaps 08 does that. But it's not obvious... I see a couple\nof DB2 extension SQLSTATEs saying you have no connection: 57015 =\n\"Connection to the local Db2 not established\" and 51006 = \"A valid\nconnection has not been established\", and then there's standard 08003\n= \"connection does not exist\" which we're currently using to shout\ninto the void when the *client* goes away (and also for dblink failure\nto find named connection, a pretty unrelated meaning).\n\n\n",
"msg_date": "Thu, 7 Jan 2021 16:22:54 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> One of the strange things about these errors is that they're\n> asynchronous/unsolicited, but they appear to the client to be the\n> response to their next request (if it doesn't eat ECONNRESET instead).\n\nRight, which is what makes class 57 (operator intervention) seem\nattractive to me. From the client's standpoint these look little\ndifferent from ERRCODE_ADMIN_SHUTDOWN or ERRCODE_CRASH_SHUTDOWN,\nwhich are in that category.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 06 Jan 2021 22:51:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "--\nBest regards\nJapin Li\n\nOn Jan 7, 2021, at 10:03 AM, Thomas Munro <thomas.munro@gmail.com<mailto:thomas.munro@gmail.com>> wrote:\n\n\n* I'm not entirely comfortable with the name \"idle_session_timeout\",\nbecause it sounds like it applies to all idle states, but actually\nit only applies when we're not in a transaction. I left the name\nalone and tried to clarify the docs, but I wonder whether a different\nname would be better. (Alternatively, we could say that it does\napply in all cases, making the effective timeout when in a transaction\nthe smaller of the two GUCs. But that'd be more complex to implement\nand less flexible, so I'm not in favor of that answer.)\n\nHmm, it is a bit confusing, but having them separate is indeed more flexible.\n\n\nYes! It is a bit confusing. How about interactive_timeout? This is used by MySQL [1].\n\n* The SQLSTATE you chose for the new error condition seems pretty\nrandom. I do not see it in the SQL standard, so using a code that's\nwithin the spec-reserved code range is certainly wrong. I went with\n08P02 (the \"P\" makes it outside the reserved range), but I'm not\nreally happy either with using class 08 (\"Connection Exception\",\nwhich seems to be mainly meant for connection-request failures),\nor the fact that ERRCODE_IDLE_IN_TRANSACTION_SESSION_TIMEOUT is\npractically identical but it's not even in the same major class.\nNow 25 (\"Invalid Transaction State\") is certainly not right for\nthis new error, but I think what that's telling us is that 25 was a\nmisguided choice for the other error. In a green field I think I'd\nput both of them in class 53 (\"Insufficient Resources\") or maybe class\n57 (\"Operator Intervention\"). Can we get away with renumbering the\nolder error at this point? In any case I'd be inclined to move the\nnew error to 53 or 57 --- anybody have a preference which?\n\nI don't have a strong view here, but 08 with a P doesn't seem crazy to\nme. 
Unlike eg 57014 (statement_timeout), 57017 (deadlock_timeout),\n55P03 (lock_timeout... huh, I just noticed that DB2 uses 57017 for\nthat, distinguished from deadlock by another error field), after these\ntimeouts you don't have a session/connection anymore. The two are a\nbit different though: in the older one, you were in a transaction, and\nit seems to me quite newsworthy that your transaction has been\naborted, information that is not conveyed quite so clearly with a\nconnection-related error class.\n\nApologize! I just think it is a Connection Exception.\n\n[1] https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_interactive_timeout",
"msg_date": "Thu, 7 Jan 2021 05:04:05 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "On Thu, Jan 7, 2021 at 4:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > One of the strange things about these errors is that they're\n> > asynchronous/unsolicited, but they appear to the client to be the\n> > response to their next request (if it doesn't eat ECONNRESET instead).\n>\n> Right, which is what makes class 57 (operator intervention) seem\n> attractive to me. From the client's standpoint these look little\n> different from ERRCODE_ADMIN_SHUTDOWN or ERRCODE_CRASH_SHUTDOWN,\n> which are in that category.\n\nYeah, that's a good argument.\n\n\n",
"msg_date": "Thu, 7 Jan 2021 18:40:07 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Thu, Jan 7, 2021 at 4:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Thomas Munro <thomas.munro@gmail.com> writes:\n>>> One of the strange things about these errors is that they're\n>>> asynchronous/unsolicited, but they appear to the client to be the\n>>> response to their next request (if it doesn't eat ECONNRESET instead).\n\n>> Right, which is what makes class 57 (operator intervention) seem\n>> attractive to me. From the client's standpoint these look little\n>> different from ERRCODE_ADMIN_SHUTDOWN or ERRCODE_CRASH_SHUTDOWN,\n>> which are in that category.\n\n> Yeah, that's a good argument.\n\nGiven the lack of commentary on this thread, I'm guessing that people\naren't so excited about this topic that a change in the existing sqlstate\nassignment for ERRCODE_IDLE_IN_TRANSACTION_SESSION_TIMEOUT would fly.\nSo I propose to change the new ERRCODE_IDLE_SESSION_TIMEOUT to be in\nclass 57 and call it good.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 10 Jan 2021 15:58:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Terminate the idle sessions"
},
{
"msg_contents": "Dear Tom,\n\n> So I propose to change the new ERRCODE_IDLE_SESSION_TIMEOUT to be in\n> class 57 and call it good.\n\nI agree with your suggestion, and I confirmed your commit.\nThanks!\n\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Wed, 13 Jan 2021 04:11:45 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Terminate the idle sessions"
}
] |
[
{
"msg_contents": "Hi,\n\nHere is a new version of the index skip scan patch, based on the v8 patch for\nthe UniqueKeys implementation from [1]. I want to start a new thread to\nsimplify navigation, hopefully I didn't forget anyone who actively\nparticipated in the discussion.\n\nTo simplify reviewing I've split it into several parts:\n\n* First two are taken from [1] just for the reference and to make cfbot happy.\n\n* Extend-UniqueKeys consists of changes that need to be done for\n  UniqueKeys to be used in skip scan. Essentially this is a reduced\n  version of the previous implementation from Jesper & David, based on the\n  new UniqueKeys infrastructure, as it does the very same thing.\n\n* Index-Skip-Scan contains non-AM-specific code and the overall\n  infrastructure, including configuration elements and so on.\n\n* Btree-implementation contains btree-specific code to implement amskip,\n  introduced in the previous patch.\n\n* The last one contains just documentation bits.\n\nInterestingly enough, with the new UniqueKey implementation skipping is\napplied in some tests unexpectedly. For those I've disabled\nindexskipscan to avoid confusion.\n\n[1]: https://www.postgresql.org/message-id/flat/CAKU4AWrwZMAL%3DuaFUDMf4WGOVkEL3ONbatqju9nSXTUucpp_pw%40mail.gmail.com",
"msg_date": "Tue, 9 Jun 2020 12:22:47 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "On Tue, Jun 9, 2020 at 6:20 PM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n\n> Hi,\n>\n> Here is a new version of index skip scan patch, based on v8 patch for\n> UniqueKeys implementation from [1]. I want to start a new thread to\n> simplify navigation, hopefully I didn't forget anyone who actively\n> participated in the discussion.\n>\n> To simplify reviewing I've split it into several parts:\n>\n> * First two are taken from [1] just for the reference and to make cfbot\n> happy.\n>\n> * Extend-UniqueKeys consists changes that needs to be done for\n> UniqueKeys to be used in skip scan. Essentially this is a reduced\n> version of previous implementation from Jesper & David, based on the\n> new UniqueKeys infrastructure, as it does the very same thing.\n>\n> * Index-Skip-Scan contains not am specific code and the overall\n> infrastructure, including configuration elements and so on.\n>\n> * Btree-implementation contains btree specific code to implement amskip,\n> introduced in the previous patch.\n>\n> * The last one contains just documentation bits.\n>\n> Interesting enough with a new UniqueKey implementation skipping is\n> applied in some tests unexpectedly. For those I've disabled\n> indexskipscan to avoid confusion.\n>\n> [1]:\n> https://www.postgresql.org/message-id/flat/CAKU4AWrwZMAL%3DuaFUDMf4WGOVkEL3ONbatqju9nSXTUucpp_pw%40mail.gmail.com\n>\n\nThanks for the patch.\n\nI just get the rough idea of patch, looks we have to narrow down the user\ncases\nwhere we can use this method. 
Consider the below example:\n\ncreate table j1(i int, im5 int, im100 int, im1000 int);\ninsert into j1 select i, i%5, i%100, i%1000 from generate_series(1,\n10000000)i;\ncreate index on j1(im5, i);\ninsert into j1 values (1, 1, 0, 0);\nanalyze j1;\n\ndemo=# select distinct * from j1 where i < 2;\n i | im5 | im100 | im1000\n---+-----+-------+--------\n 1 |   1 |     1 |      1\n(1 row)\n\ndemo=# set enable_indexskipscan to off;\nSET\ndemo=# select distinct * from j1 where i < 2;\n i | im5 | im100 | im1000\n---+-----+-------+--------\n 1 |   1 |     0 |      0\n 1 |   1 |     1 |      1\n(2 rows)\n\ndrop index j1_im5_i_idx;\n\ncreate index on j1(im5, i, im100);\ndemo=# select distinct im5, i, im100 from j1 where i < 2;\n im5 | i | im100\n-----+---+-------\n   1 | 1 |     0\n   1 | 1 |     1\n(2 rows)\ndemo=# set enable_indexskipscan to on;\nSET\ndemo=# select distinct im5, i, im100 from j1 where i < 2;\n im5 | i | im100\n-----+---+-------\n   1 | 1 |     0\n(1 row)\n\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Thu, 11 Jun 2020 16:14:07 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "> On Thu, Jun 11, 2020 at 04:14:07PM +0800, Andy Fan wrote:\n>\n> I just get the rough idea of patch, looks we have to narrow down the\n> user cases where we can use this method. Consider the below example:\n\nHi\n\nNot exactly narrow down, but rather get rid of wrong usage of skipping\nfor index scan. Since skipping for it was added later than for index\nonly scan I can imagine there are still blind spots, so good that you've\nlooked. In this particular case, when the index expressions do not fully\ncover those expressions the result needs to be distinct on, skipping just\ndoesn't have enough information and should not be used. I'll add it to\nthe next version, thanks!\n\n\n",
"msg_date": "Mon, 29 Jun 2020 14:07:09 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "On Tue, Jun 9, 2020 at 3:20 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> * Btree-implementation contains btree specific code to implement amskip,\n> introduced in the previous patch.\n\nThe way that you're dealing with B-Tree tuples here needs to account\nfor posting list tuples:\n\n> + currItem = &so->currPos.items[so->currPos.lastItem];\n> + itup = (IndexTuple) (so->currTuples + currItem->tupleOffset);\n> + nextOffset = ItemPointerGetOffsetNumber(&itup->t_tid);\n\nBut I wonder more generally what the idea here is. The following\ncomments that immediately follow provide some hints:\n\n> + /*\n> + * To check if we returned the same tuple, try to find a\n> + * startItup on the current page. For that we need to update\n> + * scankey to match the whole tuple and set nextkey to return\n> + * an exact tuple, not the next one. If the nextOffset is the\n> + * same as before, it means we are in the loop, return offnum\n> + * to the original position and jump further\n> + */\n\nWhy does it make sense to use the offset number like this? It isn't\nstable or reliable. The patch goes on to do this:\n\n> + startOffset = _bt_binsrch(scan->indexRelation,\n> + so->skipScanKey,\n> + so->currPos.buf);\n> +\n> + page = BufferGetPage(so->currPos.buf);\n> + maxoff = PageGetMaxOffsetNumber(page);\n> +\n> + if (nextOffset <= startOffset)\n> + {\n\nWhy compare a heap TID's offset number (an offset number for a heap\npage) to another offset number for a B-Tree leaf page? They're\nfundamentally different things.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 8 Jul 2020 15:44:26 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "Hi Dmitry,\n\nAlso took another look at the patch now, and found a case of incorrect data. It looks related to the new way of creating the paths, as I can't recall seeing this in earlier versions.\n\ncreate table t1 as select a,b,b%5 as c, random() as d from generate_series(1, 10) a, generate_series(1,100) b;\ncreate index on t1 (a,b,c);\n\npostgres=# explain select distinct on (a) * from t1 order by a,b desc,c;\n QUERY PLAN \n-------------------------------------------------------------------------------\n Sort (cost=2.92..2.94 rows=10 width=20)\n Sort Key: a, b DESC, c\n -> Index Scan using t1_a_b_c_idx on t1 (cost=0.28..2.75 rows=10 width=20)\n Skip scan: true\n(4 rows)\n\n\nWith the 'order by a, b desc, c' we expect the value of column 'b' to always be 100. With index_skipscan enabled, it always gives 1 though. It's not correct that the planner chooses a skip scan followed by sort in this case.\n\n-Floris\n\n\n\n",
"msg_date": "Fri, 10 Jul 2020 17:03:37 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": false,
"msg_subject": "RE: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "> On Wed, Jul 08, 2020 at 03:44:26PM -0700, Peter Geoghegan wrote:\n>\n> On Tue, Jun 9, 2020 at 3:20 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> > * Btree-implementation contains btree specific code to implement amskip,\n> > introduced in the previous patch.\n>\n> The way that you're dealing with B-Tree tuples here needs to account\n> for posting list tuples:\n>\n> > + currItem = &so->currPos.items[so->currPos.lastItem];\n> > + itup = (IndexTuple) (so->currTuples + currItem->tupleOffset);\n> > + nextOffset = ItemPointerGetOffsetNumber(&itup->t_tid);\n\nDo you mean this last part with t_tid, which could also have a tid array\nin case of posting tuple format?\n\n> > + /*\n> > + * To check if we returned the same tuple, try to find a\n> > + * startItup on the current page. For that we need to update\n> > + * scankey to match the whole tuple and set nextkey to return\n> > + * an exact tuple, not the next one. If the nextOffset is the\n> > + * same as before, it means we are in the loop, return offnum\n> > + * to the original position and jump further\n> > + */\n>\n> Why does it make sense to use the offset number like this? It isn't\n> stable or reliable. The patch goes on to do this:\n>\n> > + startOffset = _bt_binsrch(scan->indexRelation,\n> > + so->skipScanKey,\n> > + so->currPos.buf);\n> > +\n> > + page = BufferGetPage(so->currPos.buf);\n> > + maxoff = PageGetMaxOffsetNumber(page);\n> > +\n> > + if (nextOffset <= startOffset)\n> > + {\n>\n> Why compare a heap TID's offset number (an offset number for a heap\n> page) to another offset number for a B-Tree leaf page? They're\n> fundamentally different things.\n\nWell, it's obviously wrong, thanks for noticing. What is necessary is to\ncompare two index tuples, the start and the next one, to test if they're\nthe same (in which case if I'm not mistaken probably we can compare item\npointers). 
I've got this question when I was about to post a new version\nwith changes to address feedback from Andy, now I'll combine them and\nsend a cumulative patch.\n\n\n",
"msg_date": "Sat, 11 Jul 2020 18:12:58 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "> On Fri, Jul 10, 2020 at 05:03:37PM +0000, Floris Van Nee wrote:\n>\n> Also took another look at the patch now, and found a case of incorrect\n> data. It looks related to the new way of creating the paths, as I\n> can't recall seeing this in earlier versions.\n>\n> create table t1 as select a,b,b%5 as c, random() as d from generate_series(1, 10) a, generate_series(1,100) b;\n> create index on t1 (a,b,c);\n>\n> postgres=# explain select distinct on (a) * from t1 order by a,b desc,c;\n> QUERY PLAN\n> -------------------------------------------------------------------------------\n> Sort (cost=2.92..2.94 rows=10 width=20)\n> Sort Key: a, b DESC, c\n> -> Index Scan using t1_a_b_c_idx on t1 (cost=0.28..2.75 rows=10 width=20)\n> Skip scan: true\n> (4 rows)\n\nGood point, thanks for looking at this. With the latest planner version\nthere are indeed more possibilities to use skipping. It never occurred to\nme that some of those paths will still rely on the index scan returning the\nfull data set. I'll look into the details and add verification to prevent putting\nsomething like this on top of a skip scan in the next version.\n\n\n",
"msg_date": "Sat, 11 Jul 2020 18:21:03 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "> \n> Good point, thanks for looking at this. With the latest planner version there\n> are indeed more possibilities to use skipping. It never occured to me that\n> some of those paths will still rely on index scan returning full data set. I'll look\n> in details and add verification to prevent putting something like this on top of\n> skip scan in the next version.\n\nI believe the required changes are something like in attached patch. There were a few things I've changed:\n- build_uniquekeys was constructing the list incorrectly. For a DISTINCT a,b, it would create two unique keys, one with a and one with b. However, it should be one unique key with (a,b).\n- the uniquekeys that is built, still contains some redundant keys, that are normally eliminated from the path keys lists.\n- the distinct_pathkeys may be NULL, even though there's a possibility for skipping. But it wouldn't create the uniquekeys in this case. This makes the planner not choose skip scans even though it could. For example in queries that do SELECT DISTINCT ON (a) * FROM t1 WHERE a=1 ORDER BY a,b; Since a is constant, it's eliminated from regular pathkeys.\n- to combat the issues mentioned earlier, there's now a check in build_index_paths that checks if the query_pathkeys matches the useful_pathkeys. Note that we have to use the path keys here rather than any of the unique keys. The unique keys are only Expr nodes - they do not contain the necessary information about ordering. Due to elimination of some constant path keys, we have to search the attributes of the index to find the correct prefix to use in skipping.\n- creating the skip scan path did not actually fill the Path's unique keys. It should just set this to query_uniquekeys.\n\nI've attached the first two unique-keys patches (v9, 0001, 0002)), your patches, but rebased on v9 of unique keys (0003-0006) + a diff patch (0007) that applies my suggested changes on top of it.\n\n-Floris",
"msg_date": "Sun, 12 Jul 2020 12:48:47 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": false,
"msg_subject": "RE: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "> \n> I've attached the first two unique-keys patches (v9, 0001, 0002)), your\n> patches, but rebased on v9 of unique keys (0003-0006) + a diff patch (0007)\n> that applies my suggested changes on top of it.\n>\n\nI just realized there's another thing that looks a bit strange too. From reading the thread, I thought it should be the case that in create_distinct_paths, it is checked whether the uniqueness in the unique_pathlist matches the uniqueness that is needed by the query. \nThis means that I think what it should be comparing is this:\n- The generated index path should have some path-level unique keys set\n- The path-level unique keys must be at least as strict as the query-level unique keys. Eg. if path-level is unique on (a), then query-level must be (a), or possibly (a,b).\n\nI've changed the patch to compare the path-level keys (set in create index path) with the query-level keys in create_distinct_path. Currently, I don't think the previous implementation was an actual issue leading to incorrect queries, but it would be causing problems if we tried to extend the uniqueness for distinct to join rels etc.\n\nOne question about the unique keys - probably for Andy or David: I've looked in the archives to find arguments for/against using Expr nodes or EquivalenceClasses in the Unique Keys patch. However, I couldn't really find a clear answer about why the current patch uses Expr rather than EquivalenceClasses. At some point David mentioned \"that probably Expr nodes were needed rather than EquivalenceClasses\", but it's not really clear to me why. What were the thoughts behind this?\n\n-Floris",
"msg_date": "Sun, 12 Jul 2020 22:18:26 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": false,
"msg_subject": "RE: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "> On Sun, Jul 12, 2020 at 12:48:47PM +0000, Floris Van Nee wrote:\n> >\n> > Good point, thanks for looking at this. With the latest planner version there\n> > are indeed more possibilities to use skipping. It never occurred to me that\n> > some of those paths will still rely on index scan returning full data set. I'll look\n> > in details and add verification to prevent putting something like this on top of\n> > skip scan in the next version.\n>\n> I believe the required changes are something like in the attached patch. There were a few things I've changed:\n> - build_uniquekeys was constructing the list incorrectly. For a DISTINCT a,b, it would create two unique keys, one with a and one with b. However, it should be one unique key with (a,b).\n\nYes, I've also noticed that while preparing a fix for an index scan not\ncovered by the index and included it.\n\n> - the uniquekeys that is built, still contains some redundant keys, that are normally eliminated from the path keys lists.\n\nI guess you're talking about:\n\n+ if (EC_MUST_BE_REDUNDANT(ec))\n+ continue;\n\nCan you add some test cases to your changes to show the effect of it? It\nseems to me redundant keys are already eliminated at this point by either\nmake_pathkeys_for_uniquekeys or even earlier for distinct on, but it could\nbe I've missed something.\n\nAlong the same lines I'm also curious about this part:\n\n-\tListCell *k;\n-\tList *exprs = NIL;\n-\n-\tforeach(k, ec->ec_members)\n-\t{\n-\t\tEquivalenceMember *mem = (EquivalenceMember *) lfirst(k);\n-\t\texprs = lappend(exprs, mem->em_expr);\n-\t}\n-\n-\tresult = lappend(result, makeUniqueKey(exprs, false, false));\n+\tEquivalenceMember *mem = (EquivalenceMember*) lfirst(list_head(ec->ec_members));\n\nI'm curious about this myself, maybe someone can clarify. It looks like\ngenerally speaking there could be more than one member (if not\nec_has_volatile), which \"representing knowledge that multiple items are\neffectively equal\". 
Is this information not interesting enough to\npreserve in unique keys?\n\n> - the distinct_pathkeys may be NULL, even though there's a possibility for skipping. But it wouldn't create the uniquekeys in this case. This makes the planner not choose skip scans even though it could. For example in queries that do SELECT DISTINCT ON (a) * FROM t1 WHERE a=1 ORDER BY a,b; Since a is constant, it's eliminated from regular pathkeys.\n\nWhat would be the point of skipping if it's a constant?\n\n> - to combat the issues mentioned earlier, there's now a check in build_index_paths that checks if the query_pathkeys matches the useful_pathkeys. Note that we have to use the path keys here rather than any of the unique keys. The unique keys are only Expr nodes - they do not contain the necessary information about ordering. Due to elimination of some constant path keys, we have to search the attributes of the index to find the correct prefix to use in skipping.\n\nIIUC here you mean this function, right?\n\n+ prefix = find_index_prefix_for_pathkey(root,\n+ \t\t\t\t\t\t\t\t\t index,\n+ \t\t\t\t\t\t\t\t\t BackwardScanDirection,\n+ \t\t\t\t\t\t\t\t\t llast_node(PathKey,\n+ \t\t\t\t\t\t\t\t\t root->distinct_pathkeys));\n\nDoesn't it duplicate the job already done in build_index_pathkeys by\nbuilding those pathkeys again? If yes, probably it's possible to reuse\nuseful_pathkeys. Not sure about unordered indexes, but it looks like\nquery_pathkeys should also match in this case.\n\nI will also look at the follow-up questions in the next email.\n\n\n",
"msg_date": "Tue, 14 Jul 2020 18:18:52 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "> \n> > - the uniquekeys that is built, still contains some redundant keys, that are\n> normally eliminated from the path keys lists.\n> \n> I guess you're talking about:\n> \n> + if (EC_MUST_BE_REDUNDANT(ec))\n> + continue;\n> \n> Can you add some test cases to your changes to show the effect of it? It\n> seem to me redundant keys are already eliminated at this point by either\n> make_pathkeys_for_uniquekeys or even earlier for distinct on, but could be\n> I've missed something.\n> \n\nThe build_uniquekeys function calls make_pathkeys_for_uniquekeys, which checks for uniqueness using pathkey_is_unique, but not for constantness. Consider a query like:\nSELECT DISTINCT ON (a,b) * FROM t1 WHERE a=1 ORDER BY a,b,c\n\nAll the regular path keys filter out 'a' for constantness here - so they would end up with a distinct pathkeys of (b) and sort path keys of (b,c).\nThe unique keys would end up with (a,b) though. In the previous patch, it'd compared in create_distinct_paths, the pathkeys to the unique keys, so it wouldn't consider the skip scan.\nDue to the other changes I made in create_distinct_paths/query_has_uniquekeys_for, it will choose a correct plan now, even without the EC_MUST_BE_REDUNDANT check though, so it's difficult to give an actual failing test case now. However, since all code filters out constant keys, I think uniqueness should do it too - otherwise you could get into problems later on. It's also more consistent. If you already know something is unique by just (b), it doesn't make sense to store that it's unique by (a,b). Now that I think of it, the best place to do this EC_MUST_BE_REDUNDANT check is probably inside make_pathkeys_for_uniquekeys, rather than build_uniquekeys though. 
It's probably good to move it there.\n\n> Along the lines I'm also curious about this part:\n> \n> -\tListCell *k;\n> -\tList *exprs = NIL;\n> -\n> -\tforeach(k, ec->ec_members)\n> -\t{\n> -\t\tEquivalenceMember *mem = (EquivalenceMember *)\n> lfirst(k);\n> -\t\texprs = lappend(exprs, mem->em_expr);\n> -\t}\n> -\n> -\tresult = lappend(result, makeUniqueKey(exprs, false, false));\n> +\tEquivalenceMember *mem = (EquivalenceMember*)\n> +lfirst(list_head(ec->ec_members));\n> \n> I'm curious about this myself, maybe someone can clarify. It looks like\n> generaly speaking there could be more than one member (if not\n> ec_has_volatile), which \"representing knowledge that multiple items are\n> effectively equal\". Is this information is not interesting enough to preserve it\n> in unique keys?\n\nYeah, that's a good question. Hence my question about the choice for Expr rather than EquivalenceClass for the Unique Keys patch to Andy/David. When storing just Expr, it is rather difficult to check equivalence in joins for example. Suppose, later on we decide to add join support to the distinct skip scan. Consider a query like this:\nSELECT DISTINCT t1.a FROM t1 JOIN t2 ON t1.a=t2.a\nAs far as my understanding goes (I didn't look into it in detail though), I think here the distinct_pathkey will have an EqClass {t1.a, t2.a}. That results in a UniqueKey with expr (t1.a) (because currently we only take the first Expr in the list to construct the UniqueKey). 
We could also construct *two* unique keys, one with Expr (t1.a) and one with Expr (t2.a), but I don't think that's the correct approach either, as it will explode when you have multiple pathkeys, each having multiple Expr inside their EqClasses.\nThat makes it difficult to check if we can perform the DISTINCT skip scan on table t2 as well (theoretically we could, but for that we need to check equivalence classes rather than expressions).\n\n> \n> > - the distinct_pathkeys may be NULL, even though there's a possibility for\n> skipping. But it wouldn't create the uniquekeys in this case. This makes the\n> planner not choose skip scans even though it could. For example in queries\n> that do SELECT DISTINCT ON (a) * FROM t1 WHERE a=1 ORDER BY a,b; Since a\n> is constant, it's eliminated from regular pathkeys.\n> \n> What would be the point of skipping if it's a constant?\n\nFor the query: SELECT DISTINCT ON (a) * FROM t1 WHERE a=1 ORDER BY a,b;\nThere may be 1000s of records with a=1. We're only interested in the first one though. The traditional non-skip approach would still scan all records with a=1. Skip would just fetch the first one with a=1 and then skip to the next prefix and stop the scan.\n\n> \n> > - to combat the issues mentioned earlier, there's now a check in\n> build_index_paths that checks if the query_pathkeys matches the\n> useful_pathkeys. Note that we have to use the path keys here rather than\n> any of the unique keys. The unique keys are only Expr nodes - they do not\n> contain the necessary information about ordering. 
Due to elimination of\n> some constant path keys, we have to search the attributes of the index to\n> find the correct prefix to use in skipping.\n> \n> IIUC here you mean this function, right?\n> \n> + prefix = find_index_prefix_for_pathkey(root,\n> +\n> index,\n> +\n> BackwardScanDirection,\n> +\n> llast_node(PathKey,\n> +\n> root->distinct_pathkeys));\n> \n> Doesn't it duplicate the job already done in build_index_pathkeys by building\n> those pathkeys again? If yes, probably it's possible to reuse useful_pathkeys.\n> Not sure about unordered indexes, but looks like query_pathkeys should\n> also match in this case.\n> \n\nYeah, there's definitely some double work there, but the actual impact may be limited - it doesn't actually allocate a new path key, but it looks it up in root->canon_pathkeys and returns that path key. \nI wrote it like this, because I couldn't find a way to identify from a certain PathKey the actual location in the index of that column. The constructed path keys list filters out all redundant path keys. An index on (a,a,b,a,b) becomes path keys (a,b). Now if we skip on (a,b) we actually need to use prefix=3. But how to get from PathKey=b to that number 3, I didn't find a solid way except doing this. Maybe there is though?\n\n-Floris\n\n\n\n",
"msg_date": "Tue, 14 Jul 2020 18:18:50 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": false,
"msg_subject": "RE: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "On Sat, Jul 11, 2020 at 9:10 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> > > + currItem = &so->currPos.items[so->currPos.lastItem];\n> > > + itup = (IndexTuple) (so->currTuples + currItem->tupleOffset);\n> > > + nextOffset = ItemPointerGetOffsetNumber(&itup->t_tid);\n>\n> Do you mean this last part with t_tid, which could also have a tid array\n> in case of posting tuple format?\n\nYeah. There is a TID array at the end of the index tuple when the tuple is a\nposting list tuple (as indicated by BTreeTupleIsPosting()). It isn't\nsafe to assume that t_tid is a heap TID for this reason, even in code\nthat only ever considers data items (that is, non high-key tuples AKA\nnon-pivot tuples) on a leaf page. (Though new/incoming tuples cannot\nbe posting list tuples either, so you'll see the assumption that t_tid\nis just a heap TID in parts of nbtinsert.c -- though only for the\nnew/incoming item.)\n\n> Well, it's obviously wrong, thanks for noticing. What is necessary is to\n> compare two index tuples, the start and the next one, to test if they're\n> the same (in which case if I'm not mistaken probably we can compare item\n> pointers). I've got this question when I was about to post a new version\n> with changes to address feedback from Andy, now I'll combine them and\n> send a cumulative patch.\n\nThis sounds like approximately the same problem as the one that\n_bt_killitems() has to deal with as of Postgres 13. This is handled in\na way that is admittedly pretty tricky, even though the code does not\nneed to be 100% certain that it's \"the same\" tuple. Deduplication kind\nof makes that a fuzzy concept. In principle there could be one big\nindex tuple instead of 5 tuples, even though the logical contents of\nthe page have not been changed between the time we recorded heap TIDs\nin local memory and the time _bt_killitems() tried to match on those heap\nTIDs to kill_prior_tuple-kill some index tuples -- a concurrent\ndeduplication pass could do that. 
Your code needs to be prepared for\nstuff like that.\n\nUltimately posting list tuples are just a matter of understanding the\non-disk representation -- a \"Small Matter of Programming\". Even\nwithout deduplication there are potential hazards from the physical\ndeletion of LP_DEAD-marked tuples in _bt_vacuum_one_page() (which is\nnot code that runs in VACUUM, despite the name). Make sure that you\nhold a buffer pin on the leaf page throughout, because you need to do\nthat to make sure that VACUUM cannot concurrently recycle heap TIDs.\nIf VACUUM *is* able to concurrently recycle heap TIDs then it'll be\nsubtly broken. _bt_killitems() is safe because it either holds on to a\npin or gives up when the LSN changes at all. (ISTM that your only\nchoice is to hold on to a leaf page pin, since you cannot just decide\nto give up in the way that _bt_killitems() sometimes can.)\n\nNote that the rules surrounding buffer locks/pins for nbtree were\ntightened up a tiny bit today -- see commit 4a70f829. Also, it's no\nlonger okay to use raw LockBuffer() calls in nbtree, so you're going\nto have to fix that up when you next rebase -- you must use the new\n_bt_lockbuf() wrapper function instead, so that the new Valgrind\ninstrumentation is used. This shouldn't be hard.\n\nPerhaps you can use Valgrind to verify that this patch doesn't have\nany unsafe buffer accesses. I recall problems like that in earlier\nversions of the patch series. Valgrind should be able to detect most\nbugs like that (though see comments within _bt_lockbuf() for details\nof a bug in this area that Valgrind cannot detect even now).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 21 Jul 2020 16:35:55 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "> On Tue, Jul 14, 2020 at 06:18:50PM +0000, Floris Van Nee wrote:\n>\n> Due to the other changes I made in create_distinct_paths/query_has_uniquekeys_for, it will choose a correct plan now, even without the EC_MUST_BE_REDUNDANT check though, so it's difficult to give an actual failing test case now. However, since all code filters out constant keys, I think uniqueness should do it too - otherwise you could get into problems later on. It's also more consistent. If you already know something is unique by just (b), it doesn't make sense to store that it's unique by (a,b). Now that I think of it, the best place to do this EC_MUST_BE_REDUNDANT check is probably inside make_pathkeys_for_uniquekeys, rather than build_uniquekeys though. It's probably good to move it there.\n\nThat would be my suggestion as well.\n\n> > Along the lines I'm also curious about this part:\n> >\n> > -\tListCell *k;\n> > -\tList *exprs = NIL;\n> > -\n> > -\tforeach(k, ec->ec_members)\n> > -\t{\n> > -\t\tEquivalenceMember *mem = (EquivalenceMember *)\n> > lfirst(k);\n> > -\t\texprs = lappend(exprs, mem->em_expr);\n> > -\t}\n> > -\n> > -\tresult = lappend(result, makeUniqueKey(exprs, false, false));\n> > +\tEquivalenceMember *mem = (EquivalenceMember*)\n> > +lfirst(list_head(ec->ec_members));\n> >\n> > I'm curious about this myself, maybe someone can clarify. It looks like\n> > generaly speaking there could be more than one member (if not\n> > ec_has_volatile), which \"representing knowledge that multiple items are\n> > effectively equal\". Is this information is not interesting enough to preserve it\n> > in unique keys?\n>\n> Yeah, that's a good question. Hence my question about the choice for Expr rather than EquivalenceClass for the Unique Keys patch to Andy/David. When storing just Expr, it is rather difficult to check equivalence in joins for example. Suppose, later on we decide to add join support to the distinct skip scan. 
Consider a query like this:\n> SELECT DISTINCT t1.a FROM t1 JOIN t2 ON t1.a=t2.a\n> As far as my understanding goes (I didn't look into it in detail though), I think here the distinct_pathkey will have an EqClass {t1.a, t2.a}. That results in a UniqueKey with expr (t1.a) (because currently we only take the first Expr in the list to construct the UniqueKey). We could also construct *two* unique keys, one with Expr (t1.a) and one with Expr (t2.a), but I don't think that's the correct approach either, as it will explode when you have multiple pathkeys, each having multiple Expr inside their EqClasses.\n\nOne UniqueKey can have multiple corresponding expressions, which also gives\nus the possibility of having one unique key with (t1.a, t2.a), and then it\nlooks similar to an EquivalenceClass.\n\n> > > - the distinct_pathkeys may be NULL, even though there's a possibility for\n> > skipping. But it wouldn't create the uniquekeys in this case. This makes the\n> > planner not choose skip scans even though it could. For example in queries\n> > that do SELECT DISTINCT ON (a) * FROM t1 WHERE a=1 ORDER BY a,b; Since a\n> > is constant, it's eliminated from regular pathkeys.\n> >\n> > What would be the point of skipping if it's a constant?\n>\n> For the query: SELECT DISTINCT ON (a) * FROM t1 WHERE a=1 ORDER BY a,b;\n> There may be 1000s of records with a=1. We're only interested in the first one though. The traditional non-skip approach would still scan all records with a=1. Skip would just fetch the first one with a=1 and then skip to the next prefix and stop the scan.\n\nThe idea behind this query sounds questionable to me; it would be more transparent\nto do this without distinct, since skipping will actually do exactly\nthe same stuff just under another name. 
But if allowing skipping on\nconstants does not bring significant changes in the code, it's probably\nfine.\n\n> > > - to combat the issues mentioned earlier, there's now a check in\n> > build_index_paths that checks if the query_pathkeys matches the\n> > useful_pathkeys. Note that we have to use the path keys here rather than\n> > any of the unique keys. The unique keys are only Expr nodes - they do not\n> > contain the necessary information about ordering. Due to elimination of\n> > some constant path keys, we have to search the attributes of the index to\n> > find the correct prefix to use in skipping.\n> >\n> > IIUC here you mean this function, right?\n> >\n> > + prefix = find_index_prefix_for_pathkey(root,\n> > +\n> > index,\n> > +\n> > BackwardScanDirection,\n> > +\n> > llast_node(PathKey,\n> > +\n> > root->distinct_pathkeys));\n> >\n> > Doesn't it duplicate the job already done in build_index_pathkeys by building\n> > those pathkeys again? If yes, probably it's possible to reuse useful_pathkeys.\n> > Not sure about unordered indexes, but looks like query_pathkeys should\n> > also match in this case.\n> >\n>\n> Yeah, there's definitely some double work there, but the actual impact may be limited - it doesn't actually allocate a new path key, but it looks it up in root->canon_pathkeys and returns that path key.\n> I wrote it like this, because I couldn't find a way to identify from a certain PathKey the actual location in the index of that column. The constructed path keys list filters out all redundant path keys. An index on (a,a,b,a,b) becomes path keys (a,b). Now if we skip on (a,b) we actually need to use prefix=3. But how to get from PathKey=b to that number 3, I didn't find a solid way except doing this. 
Maybe there is though?\n\nI don't think there is a direct way, but why not modify\nbuild_index_paths to also provide this information, or compare\nindex_pathkeys expressions with indextlist without actually creating those\npathkeys again?\n\nAnd a couple of words about this thread [1]. It looks to me like a strange\nway of interacting with the community. Are you going to duplicate\neverything there, or what are your plans? At the very least you could try to\ninclude everyone involved in the recipients list, not exclude some of\nthe authors.\n\n[1]: https://www.postgresql.org/message-id/flat/e4b623692a1447d4a13ac80fa271c8e6%40opammb0561.comp.optiver.com\n\n\n",
"msg_date": "Thu, 23 Jul 2020 11:53:51 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
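The skipping behaviour discussed in this exchange (fetch the first tuple of a prefix, then jump straight past the rest of that prefix) can be modelled outside the planner. Below is a toy Python sketch over a sorted list of numeric tuples standing in for leaf-page entries; it illustrates the loose index scan idea only, and is not the nbtree implementation:

```python
from bisect import bisect_right

def skip_scan(entries, prefix_len):
    """Loose index scan over a sorted list of numeric tuples: return the
    first entry for each distinct prefix, jumping over duplicates with a
    binary search instead of visiting every entry."""
    out = []
    pos = 0
    while pos < len(entries):
        first = entries[pos]
        out.append(first)
        # Seek past every entry sharing this prefix (the "skip" step).
        sentinel = first[:prefix_len] + (float("inf"),) * (len(first) - prefix_len)
        pos = bisect_right(entries, sentinel, pos)
    return out

# A stand-in for index entries on (a, b).
index = [(1, 1), (1, 2), (1, 3), (2, 1), (2, 5), (3, 2)]
```

For the SELECT DISTINCT ON (a) * FROM t1 WHERE a=1 case above, the scan returns the first (1, b) entry and the very first skip already lands beyond a=1, so the scan stops there instead of visiting every a=1 record.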
{
"msg_contents": "> \n> One UniqueKey can have multiple corresponding expressions, which gives us\n> also possibility of having one unique key with (t1.a, t2.a) and it looks now\n> similar to EquivalenceClass.\n> \n\nI believe the current definition of a unique key with two expressions (t1.a, t2.a) means that it's unique on the tuple (t1.a, t2.a) - this gives weaker guarantees than uniqueness on (t1.a) and uniqueness on (t2.a).\n\n> \n> The idea behind this query sounds questionable to me, more transparent\n> would be to do this without distinct, skipping will actually do exactly the same\n> stuff just under another name. But if allowing skipping on constants do not\n> bring significant changes in the code probably it's fine.\n> \n\nYeah indeed, I didn't say it's a query that people should generally write. :-) It's better to write as a regular SELECT with LIMIT 1 of course. However, it's more to be consistent and predictable to the user: if a SELECT DISTINCT ON (a) * FROM t1 runs fast, then it doesn't make sense to the user if a SELECT DISTINCT ON (a) * FROM t1 WHERE a=2 runs slow. And to support it also makes the implementation more consistent with little code changes.\n\n> >\n> > Yeah, there's definitely some double work there, but the actual impact may\n> be limited - it doesn't actually allocate a new path key, but it looks it up in\n> root->canon_pathkeys and returns that path key.\n> > I wrote it like this, because I couldn't find a way to identify from a certain\n> PathKey the actual location in the index of that column. The constructed path\n> keys list filters out all redundant path keys. An index on (a,a,b,a,b) becomes\n> path keys (a,b). Now if we skip on (a,b) we actually need to use prefix=3. But\n> how to get from PathKey=b to that number 3, I didn't find a solid way except\n> doing this. 
Maybe there is though?\n> \n> I don't think there is a direct way, but why not modify build_index_paths to\n> also provide this information, or compare index_pathkeys expressions with\n> indextlist without actually create those pathkeys again?\n> \n\nI agree there could be other ways - I don't currently have a strong preference for either. I can have a look at this later.\n\n> And couple of words about this thread [1]. It looks to me like a strange way\n> of interacting with the community. Are you going to duplicate there\n> everything, or what are your plans? At the very least you could try to include\n> everyone involved in the recipients list, not exclude some of the authors.\n> \n\nWhen I wrote the first mail in the thread, I went to this thread [1] and included everyone from there, but I see now that I only included the to: and cc: people and forgot the original thread author, you. I'm sorry about that - I should've looked better to make sure I had everyone.\nIn any case, my plan is to keep the patch at least applicable to master, as I believe it can be helpful for discussions about both patches.\n\n[1] https://www.postgresql.org/message-id/20200609102247.jdlatmfyeecg52fi%40localhost\n\n\n",
"msg_date": "Thu, 23 Jul 2020 11:43:56 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": false,
"msg_subject": "RE: Index Skip Scan (new UniqueKeys)"
},
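The prefix question discussed above (index columns (a,a,b,a,b), de-duplicated path keys (a,b), required skip prefix 3) is mechanical enough to state as a small function. This is a Python sketch of just that arithmetic, with an invented name; it is not the proposed find_index_prefix_for_pathkey itself:

```python
def skip_prefix(index_columns, pathkeys):
    """Length of the shortest index-column prefix whose columns cover the
    de-duplicated pathkeys in order, or None if the index can't provide
    them. E.g. columns (a,a,b,a,b) with pathkeys (a,b) need prefix 3,
    since distinct values of (a,a,b) are exactly distinct values of (a,b)."""
    if not pathkeys:
        return 0
    remaining = list(pathkeys)
    for n, col in enumerate(index_columns, start=1):
        if col == remaining[0]:
            remaining.pop(0)
            if not remaining:
                return n
    return None
```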
{
"msg_contents": "> On Tue, Jul 21, 2020 at 04:35:55PM -0700, Peter Geoghegan wrote:\n>\n> > Well, it's obviously wrong, thanks for noticing. What is necessary is to\n> > compare two index tuples, the start and the next one, to test if they're\n> > the same (in which case if I'm not mistaken probably we can compare item\n> > pointers). I've got this question when I was about to post a new version\n> > with changes to address feedback from Andy, now I'll combine them and\n> > send a cumulative patch.\n>\n> This sounds like approximately the same problem as the one that\n> _bt_killitems() has to deal with as of Postgres 13. This is handled in\n> a way that is admittedly pretty tricky, even though the code does not\n> need to be 100% certain that it's \"the same\" tuple. Deduplication kind\n> of makes that a fuzzy concept. In principle there could be one big\n> index tuple instead of 5 tuples, even though the logical contents of\n> the page have not been changed between the time we recording heap TIDs\n> in local and the time _bt_killitems() tried to match on those heap\n> TIDs to kill_prior_tuple-kill some index tuples -- a concurrent\n> deduplication pass could do that. Your code needs to be prepared for\n> stuff like that.\n>\n> Ultimately posting list tuples are just a matter of understanding the\n> on-disk representation -- a \"Small Matter of Programming\". Even\n> without deduplication there are potential hazards from the physical\n> deletion of LP_DEAD-marked tuples in _bt_vacuum_one_page() (which is\n> not code that runs in VACUUM, despite the name). Make sure that you\n> hold a buffer pin on the leaf page throughout, because you need to do\n> that to make sure that VACUUM cannot concurrently recycle heap TIDs.\n> If VACUUM *is* able to concurrently recycle heap TIDs then it'll be\n> subtly broken. _bt_killitems() is safe because it either holds on to a\n> pin or gives up when the LSN changes at all. 
(ISTM that your only\n> choice is to hold on to a leaf page pin, since you cannot just decide\n> to give up in the way that _bt_killitems() sometimes can.)\n\nI see, thanks for the clarification. You're right, in this part of the\nimplementation there is no way to give up if the LSN changes the way\n_bt_killitems does. As far as I can see the leaf page is already pinned\nall the time between reading relevant tuples and comparing them; I only\nneed to handle posting list tuples.\n\n\n",
"msg_date": "Mon, 27 Jul 2020 12:24:31 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "On Mon, 13 Jul 2020 at 10:18, Floris Van Nee <florisvannee@optiver.com> wrote:\n> One question about the unique keys - probably for Andy or David: I've looked in the archives to find arguments for/against using Expr nodes or EquivalenceClasses in the Unique Keys patch. However, I couldn't really find a clear answer about why the current patch uses Expr rather than EquivalenceClasses. At some point David mentioned \"that probably Expr nodes were needed rather than EquivalenceClasses\", but it's not really clear to me why. What were the thoughts behind this?\n\nI'm still not quite sure on this either way. I did think\nEquivalenceClasses were more suitable before I wrote the POC patch for\nunique keys. But after that, I had in mind that Exprs might be\nbetter. The reason I thought this was due to the fact that the\nDISTINCT clause list is a bunch of Exprs and if the UniqueKeys were\nEquivalenceClasses then checking to see if the DISTINCT can be skipped\nturned into something more complex that required looking through lists\nof ec_members rather than just checking if the uniquekey exprs were a\nsubset of the DISTINCT clause.\n\nThinking about it a bit harder, if we did use Exprs then it would mean\na case like the following wouldn't work for Andy's DISTINCT no-op\nstuff.\n\nCREATE TABLE xy (x int primary key, y int not null);\n\nSELECT DISTINCT y FROM xy WHERE x=y;\n\nwhereas if we use EquivalenceClasses then we'll find that we have an\nEC with x,y in it and can skip the DISTINCT since we have a UniqueKey\ncontaining that EquivalenceClass.\n\nAlso, looking at what Andy wrote to make a case like the following\nwork in his populate_baserel_uniquekeys() function in the 0002 patch:\n\nCREATE TABLE ab (a int, b int, primary key(a,b));\nSELECT DISTINCT a FROM ab WHERE b = 1;\n\nit's a bit uninspiring. 
Really what we want here when checking if we\ncan skip doing the DISTINCT is a UniqueKey set using\nEquivalenceClasses as we can just insist that any unmatched UniqueKey\nitems have an ec_is_const == true. However, that means we have to loop\nthrough the ec_members of the EquivalenceClasses in the uniquekeys\nduring the DISTINCT check. That's particularly bad when you consider\nthat in a partitioned table case there might be an ec_member for each\nchild partition and there could be 1000s of child partitions and\nfollowing those ec_members chains is going to be too slow.\n\nMy current thoughts are that we should be using EquivalenceClasses but\nwe should first add some infrastructure to make them perform better.\nMy current thoughts are that we do something like what I mentioned in\n[1] or something more like what Andres mentions in [2]. After that,\nwe could either make EquivalenceClass.ec_members a hash table or\nbinary search tree. Or even perhaps just have a single hash table/BST\nfor all EquivalenceClasses that allows very fast lookups from {Expr}\n-> {EquivalenceClass}. I think an Expr can only belong in a single\nnon-merged EquivalenceClass. So when we do merging of\nEquivalenceClasses we could just repoint that data structure to point\nto the new EquivalenceClass. We'd never point to ones that have\nec_merged != NULL. This would also allow us to fix the poor\nperformance in regards to get_eclass_for_sort_expr() for partitioned\ntables.\n\nSo, it seems the patch dependency chain for skip scans just got a bit longer :-(\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvrEXcadNYAAdq6RO0eKZUG6rRHXJGAbpzj8y432gCD9bA%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/20190920051857.2fhnvhvx4qdddviz%40alap3.anarazel.de#c3add3919c534591eae2179a6c82742c\n\n\n",
"msg_date": "Fri, 31 Jul 2020 15:06:44 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
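The single hash table David describes, with fast lookups from {Expr} -> {EquivalenceClass} and merges handled by repointing so that lookups never land on a class whose ec_merged is set, can be sketched as a data structure on its own. Here is a Python sketch with invented names (strings stand in for Expr nodes); it is not a patch against the planner, just the shape of the idea:

```python
class EC:
    """Toy EquivalenceClass: a member set plus an ec_merged-style link."""
    def __init__(self, members):
        self.members = set(members)
        self.merged = None          # analogue of ec_merged

class ECTable:
    """One hash table for all classes: expr -> its non-merged EC."""
    def __init__(self):
        self.by_expr = {}

    def lookup(self, expr):
        """O(1) lookup; creates a singleton class on first sight."""
        ec = self.by_expr.get(expr)
        if ec is None:
            ec = EC([expr])
            self.by_expr[expr] = ec
        return ec

    def merge(self, x, y):
        """Merge the classes of two exprs and repoint every member,
        so lookups never return a class whose merged link is set."""
        a, b = self.lookup(x), self.lookup(y)
        if a is b:
            return a
        new = EC(a.members | b.members)
        a.merged = b.merged = new
        for expr in new.members:
            self.by_expr[expr] = new
        return new
```

With this layout, the per-expr repointing at merge time is what keeps every later lookup O(1) with no ec_members chain to walk, which is the property that matters for the many-partition case mentioned above.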
{
"msg_contents": "Hi David:\n\nThanks for looking into this.\n\nOn Fri, Jul 31, 2020 at 11:07 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Mon, 13 Jul 2020 at 10:18, Floris Van Nee <florisvannee@optiver.com>\n> wrote:\n> > One question about the unique keys - probably for Andy or David: I've\n> looked in the archives to find arguments for/against using Expr nodes or\n> EquivalenceClasses in the Unique Keys patch. However, I couldn't really\n> find a clear answer about why the current patch uses Expr rather than\n> EquivalenceClasses. At some point David mentioned \"that probably Expr nodes\n> were needed rather than EquivalenceClasses\", but it's not really clear to\n> me why. What were the thoughts behind this?\n>\n> I'm still not quite sure on this either way. I did think\n> EquivalenceClasses were more suitable before I wrote the POC patch for\n> unique keys. But after that, I had in mind that Exprs might be\n> better. The reason I thought this was due to the fact that the\n> DISTINCT clause list is a bunch of Exprs and if the UniqueKeys were\n> EquivalenceClasses then checking to see if the DISTINCT can be skipped\n> turned into something more complex that required looking through lists\n> of ec_members rather than just checking if the uniquekey exprs were a\n> subset of the DISTINCT clause.\n>\n\n> Thinking about it a bit harder, if we did use Exprs then it would mean\n> it a case like the following wouldn't work for Andy's DISTINCT no-op\n> stuff.\n>\n> CREATE TABLE xy (x int primary key, y int not null);\n>\n> SELECT DISTINCT y FROM xy WHERE x=y;\n>\n> whereas if we use EquivalenceClasses then we'll find that we have an\n> EC with x,y in it and can skip the DISTINCT since we have a UniqueKey\n> containing that EquivalenceClass.\n\n\n> Also, looking at what Andy wrote to make a case like the following\n> work in his populate_baserel_uniquekeys() function in the 0002 patch:\n>\n> CREATE TABLE ab (a int, b int, primary key(a,b));\n> SELECT DISTINCT a 
FROM ab WHERE b = 1;\n>\n> it's a bit uninspiring. Really what we want here when checking if we\n> can skip doing the DISTINCT is a UniqueKey set using\n> EquivalenceClasses as we can just insist that any unmatched UniqueKey\n> items have an ec_is_const == true. However, that means we have to loop\n> through the ec_members of the EquivalenceClasses in the uniquekeys\n> during the DISTINCT check. That's particularly bad when you consider\n> that in a partitioned table case there might be an ec_member for each\n> child partition and there could be 1000s of child partitions and\n> following those ec_members chains is going to be too slow.\n>\n> My current thoughts are that we should be using EquivalenceClasses but\n> we should first add some infrastructure to make them perform better.\n> My current thoughts are that we do something like what I mentioned in\n> [1] or something more like what Andres mentions in [2]. After that,\n> we could either make EquivalenceClass.ec_members a hash table or\n> binary search tree. Or even perhaps just have a single hash table/BST\n> for all EquivalenceClasses that allows very fast lookups from {Expr}\n> -> {EquivalenceClass}. I think an Expr can only belong in a single\n> non-merged EquivalenceClass. So when we do merging of\n> EquivalenceClasses we could just repoint that data structure to point\n> to the new EquivalenceClass. We'd never point to ones that have\n> ec_merged != NULL. This would also allow us to fix the poor\n> performance in regards to get_eclass_for_sort_expr() for partitioned\n> tables.\n>\n> So, it seems the patch dependency chain for skip scans just got a bit\n> longer :-(\n>\n>\nI admit that EquivalenceClasses have better expressive power. There are\n2 more\ncases we can handle better with EquivalenceClasses. SELECT DISTINCT a, b,\nc\nFROM t WHERE a = b; Currently the UniqueKey is (a, b, c), but it would better\nbe (a, c)\nand (b, c). 
The other case happens similarly in group by case.\n\nAfter realizing this, I am still hesitant to do that, due to the\ncomplexity. If we do that,\nwe may have to maintain a EquivalenceClasses in one more place or make the\nexisting\nEquivalenceClasses List longer, for example: SELECT pk FROM t; The\ncurrent\ninfrastructure doesn't create any EquivalenceClasses for pk. So we have to\ncreate\na new one in this case and reuse some existing ones in other cases.\nFinally since the\nEquivalenceClasses is not so straight to upper user, we have to depend on\nthe\ninfrastructure change to look up an EquivalenceClasses quickly from an\nExpr.\n\nI rethink more about the case you provide above, IIUC, there is such issue\nfor joinrel.\nthen we can just add a EC checking for populate_baserel_uniquekeys. As for\nthe\nDISTINCT/GROUP BY case, we should build the UniqueKeys from\nroot->distinct_pathkeys\nand root->group_pathkeys where the EquivalenceClasses are already there.\n\nI am still not insisting on either Expr or EquivalenceClasses right now,\nif we need to\nchange it to EquivalenceClasses, I'd see if we need to have more places to\ntake\ncare before doing that.\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Sun, 2 Aug 2020 18:36:27 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "> On Mon, Jul 27, 2020 at 12:24:31PM +0200, Dmitry Dolgov wrote:\n>\n> I see, thanks for clarification. You're right, in this part of\n> implementation there is no way to give up if LSN changes like\n> _bt_killitems does. As far as I can see the leaf page is already pinned\n> all the time between reading relevant tuples and comparing them, I only\n> need to handle posting list tuples.\n\nHere is a new version that hopefully address most of the concerns\nmentioned in this thread so far. As before, first two patches are taken\nfrom UniqueKeys thread and attached only for the reference. List of\nchanges includes:\n\n* fix for index scan not being fully covered\n* rebase on the latest UniqueKey patch\n* taking into account posting tuples (although I must say I couldn't\n produce a test that will hit this part, so I would appreciate if\n someone can take a look)\n* fixes suggested by Floris with adjustments as discussed in the thread\n\nThere are no changes related to EquivalenceClasses vs expressions, which\nwould probably be my next target. Having this in mind I must admit I'm\nnot super excited about possibility of including another patch as a\ndependency without clear prospects and plans for it.\n\nThanks for the feedback folks!",
"msg_date": "Sat, 15 Aug 2020 16:12:40 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "On Sat, Aug 15, 2020 at 7:09 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> Here is a new version that hopefully address most of the concerns\n> mentioned in this thread so far. As before, first two patches are taken\n> from UniqueKeys thread and attached only for the reference. List of\n> changes includes:\n\nSome thoughts on this version of the patch series (I'm focussing on\nv36-0005-Btree-implementation-of-skipping.patch again):\n\n* I see the following compiler warning:\n\n/code/postgresql/patch/build/../source/src/backend/optimizer/path/uniquekeys.c:\nIn function ‘populate_baserel_uniquekeys’:\n/code/postgresql/patch/build/../source/src/backend/optimizer/path/uniquekeys.c:797:13:\nwarning: ‘expr’ may be used uninitialized in this function\n[-Wmaybe-uninitialized]\n 797 | else if (!list_member(unique_index->rel->reltarget->exprs, expr))\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n* Perhaps the warning is related to this nearby code that I noticed\nValgrind complains about:\n\n==1083468== VALGRINDERROR-BEGIN\n==1083468== Invalid read of size 4\n==1083468== at 0x59568A: get_exprs_from_uniqueindex (uniquekeys.c:771)\n==1083468== by 0x593C5B: populate_baserel_uniquekeys (uniquekeys.c:140)\n==1083468== by 0x56AEA5: set_plain_rel_size (allpaths.c:586)\n==1083468== by 0x56AADB: set_rel_size (allpaths.c:412)\n==1083468== by 0x56A8CD: set_base_rel_sizes (allpaths.c:323)\n==1083468== by 0x56A5A7: make_one_rel (allpaths.c:185)\n==1083468== by 0x5AB426: query_planner (planmain.c:269)\n==1083468== by 0x5AF02C: grouping_planner (planner.c:2058)\n==1083468== by 0x5AD202: subquery_planner (planner.c:1015)\n==1083468== by 0x5ABABF: standard_planner (planner.c:405)\n==1083468== by 0x5AB7F8: planner (planner.c:275)\n==1083468== by 0x6E6F84: pg_plan_query (postgres.c:875)\n==1083468== by 0x6E70C4: pg_plan_queries (postgres.c:966)\n==1083468== by 0x6E7497: exec_simple_query (postgres.c:1158)\n==1083468== by 0x6EBCD3: PostgresMain 
(postgres.c:4309)\n==1083468== by 0x624284: BackendRun (postmaster.c:4541)\n==1083468== by 0x623995: BackendStartup (postmaster.c:4225)\n==1083468== by 0x61FB70: ServerLoop (postmaster.c:1742)\n==1083468== by 0x61F309: PostmasterMain (postmaster.c:1415)\n==1083468== by 0x514AF2: main (main.c:209)\n==1083468== Address 0x75f13e0 is 4,448 bytes inside a block of size\n8,192 alloc'd\n==1083468== at 0x483B7F3: malloc (in\n/usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)\n==1083468== by 0x8C15C8: AllocSetAlloc (aset.c:919)\n==1083468== by 0x8CEA52: palloc (mcxt.c:964)\n==1083468== by 0x267F25: systable_beginscan (genam.c:373)\n==1083468== by 0x8682CE: SearchCatCacheMiss (catcache.c:1359)\n==1083468== by 0x868167: SearchCatCacheInternal (catcache.c:1299)\n==1083468== by 0x867E2C: SearchCatCache1 (catcache.c:1167)\n==1083468== by 0x8860B2: SearchSysCache1 (syscache.c:1123)\n==1083468== by 0x8BD482: check_enable_rls (rls.c:66)\n==1083468== by 0x68A113: get_row_security_policies (rowsecurity.c:134)\n==1083468== by 0x683C2C: fireRIRrules (rewriteHandler.c:2045)\n==1083468== by 0x687340: QueryRewrite (rewriteHandler.c:3962)\n==1083468== by 0x6E6EB1: pg_rewrite_query (postgres.c:784)\n==1083468== by 0x6E6D23: pg_analyze_and_rewrite (postgres.c:700)\n==1083468== by 0x6E7476: exec_simple_query (postgres.c:1155)\n==1083468== by 0x6EBCD3: PostgresMain (postgres.c:4309)\n==1083468== by 0x624284: BackendRun (postmaster.c:4541)\n==1083468== by 0x623995: BackendStartup (postmaster.c:4225)\n==1083468== by 0x61FB70: ServerLoop (postmaster.c:1742)\n==1083468== by 0x61F309: PostmasterMain (postmaster.c:1415)\n==1083468==\n==1083468== VALGRINDERROR-END\n\n(You'll see the same error if you run Postgres Valgrind + \"make\ninstallcheck\", though I don't think that the queries in question are\ntests that you yourself wrote.)\n\n* IndexScanDescData.xs_itup comments could stand to be updated here --\nIndexScanDescData.xs_want_itup is no longer just about 
index-only\nscans.\n\n* Do we really need the AM-level boolean flag/argument named\n\"scanstart\"? Why not just follow the example of btgettuple(), which\ndetermines whether or not the scan has been initialized based on the\ncurrent scan position?\n\nJust because you set so->currPos.buf to InvalidBuffer doesn't mean you\ncannot or should not take the same approach as btgettuple(). And even\nif you can't take exactly the same approach, I would still think that\nthe scan's opaque B-Tree state should remember if it's the first call\nto _bt_skip() (rather than some subsequent call) in some other way\n(e.g. carrying a \"scanstart\" bool flag directly).\n\nA part of my objection to \"scanstart\" is that it seems to require that\nmuch of the code within _bt_skip() get another level of\nindentation...which makes it even more difficult to follow.\n\n* I don't understand what _bt_scankey_within_page() comments mean when\nthey refer to \"the page highkey\". It looks like this function examines\nthe highest data item on the page, not the high key.\n\nIt is highly confusing to refer to a tuple as the page high key if it\nisn't the tuple from the P_HIKEY offset number on a non-rightmost\npage, which is a pivot tuple even on the leaf level (as indicated by\nBTreeTupleIsPivot()).\n\n* Why does _bt_scankey_within_page() have an unused \"ScanDirection\ndir\" argument?\n\n* Why is it okay to do anything important based on the\n_bt_scankey_within_page() return value?\n\nIf the page is empty, then how can we know that it's okay to go to the\nnext value? I'm concerned that there could be subtle bugs in this\narea. VACUUM will usually just delete the empty page. But it won't\nalways do so, for a variety of reasons that aren't worth going into\nnow. This could mask bugs in this area. 
I'm concerned about patterns\nlike this one from _bt_skip():\n\n while (!nextFound)\n {\n ....\n\n if (_bt_scankey_within_page(scan, so->skipScanKey,\n so->currPos.buf, dir))\n {\n ...\n }\n else\n /*\n * If startItup could be not found within the current page,\n * assume we found something new\n */\n nextFound = true;\n ....\n }\n\nWhy would you assume that \"we found something new\" here? In general I\njust don't understand the design of _bt_skip(). I get the basic idea\nof what you're trying to do, but it could really use better comments.\n\n*The \"jump one more time if it's the same as at the beginning\" thing\nseems scary to me. Maybe you should be doing something with the actual\nhigh key here.\n\n* Tip: You might find cases involving \"empty but not yet deleted\"\npages a bit easier to test by temporarily disabling page deletion. You\ncan modify nbtree.c to look like this:\n\nindex a1ad22f785..db977a0300 100644\n--- a/src/backend/access/nbtree/nbtree.c\n+++ b/src/backend/access/nbtree/nbtree.c\n@@ -1416,6 +1416,7 @@ backtrack:\n Assert(!attempt_pagedel || nhtidslive == 0);\n }\n\n+ attempt_pagedel = false;\n if (attempt_pagedel)\n {\n MemoryContext oldcontext;\n\nThat's all I have for now.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 21 Sep 2020 17:59:32 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "On Mon, Sep 21, 2020 at 5:59 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> That's all I have for now.\n\nOne more thing. I don't think that this should be a bitwise AND:\n\n if ((offnum > maxoff) & (so->currPos.nextPage == P_NONE))\n {\n ....\n }\n\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 21 Sep 2020 20:23:24 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "> On Mon, Sep 21, 2020 at 05:59:32PM -0700, Peter Geoghegan wrote:\n>\n> * I see the following compiler warning:\n>\n> /code/postgresql/patch/build/../source/src/backend/optimizer/path/uniquekeys.c:\n> In function ‘populate_baserel_uniquekeys’:\n> /code/postgresql/patch/build/../source/src/backend/optimizer/path/uniquekeys.c:797:13:\n> warning: ‘expr’ may be used uninitialized in this function\n> [-Wmaybe-uninitialized]\n> 797 | else if (!list_member(unique_index->rel->reltarget->exprs, expr))\n> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThis is mostly for UniqueKeys patch, which is attached here only as a\ndependency, but I'll prepare changes for that. Interesting enough I\ncan't reproduce this warning, but if I understand correctly gcc has some\nhistory of spurious uninitialized warnings, so I guess it could be\nversion dependent.\n\n> * Perhaps the warning is related to this nearby code that I noticed\n> Valgrind complains about:\n>\n> ==1083468== VALGRINDERROR-BEGIN\n> ==1083468== Invalid read of size 4\n> ==1083468== at 0x59568A: get_exprs_from_uniqueindex (uniquekeys.c:771)\n> ==1083468== by 0x593C5B: populate_baserel_uniquekeys (uniquekeys.c:140)\n\nThis also belongs to UniqueKeys patch, but at least I can reproduce this\none. My guess is that nkeycolums should be used there, not ncolums,\nwhich is visible in index_incuding tests. The same as previous one, will\nprepare corresponding changes.\n\n> * Do we really need the AM-level boolean flag/argument named\n> \"scanstart\"? Why not just follow the example of btgettuple(), which\n> determines whether or not the scan has been initialized based on the\n> current scan position?\n>\n> Just because you set so->currPos.buf to InvalidBuffer doesn't mean you\n> cannot or should not take the same approach as btgettuple(). 
And even\n> if you can't take exactly the same approach, I would still think that\n> the scan's opaque B-Tree state should remember if it's the first call\n> to _bt_skip() (rather than some subsequent call) in some other way\n> (e.g. carrying a \"scanstart\" bool flag directly).\n\nYes, agree, carrying this flag inside the opaque state would be better.\n\n> * Why is it okay to do anything important based on the\n> _bt_scankey_within_page() return value?\n>\n> If the page is empty, then how can we know that it's okay to go to the\n> next value? I'm concerned that there could be subtle bugs in this\n> area. VACUUM will usually just delete the empty page. But it won't\n> always do so, for a variety of reasons that aren't worth going into\n> now. This could mask bugs in this area. I'm concerned about patterns\n> like this one from _bt_skip():\n>\n> while (!nextFound)\n> {\n> ....\n>\n> if (_bt_scankey_within_page(scan, so->skipScanKey,\n> so->currPos.buf, dir))\n> {\n> ...\n> }\n> else\n> /*\n> * If startItup could be not found within the current page,\n> * assume we found something new\n> */\n> nextFound = true;\n> ....\n> }\n>\n> Why would you assume that \"we found something new\" here? In general I\n> just don't understand the design of _bt_skip(). I get the basic idea\n> of what you're trying to do, but it could really use better comments.\n\nYeah, I'll put more efforts into clear comments. There are two different\nways in which _bt_scankey_within_page is being used.\n\nThe first one is to check if it's possible to skip traversal of the tree\nfrom root in case if what we're looking for could be on the current\npage. In this case an empty page would mean we need to search from the\nroot, so not sure what could be the issue here?\n\nThe second one (that you've highlighted above) I admit is probably the\nmost questionable part of the patch and open for suggestions how to\nimprove it. 
It's required for one particular case with a cursor when\nscan advances forward but reads backward. What could happen here is we\nfound one valid item, but the next one e.g. do not pass scan key\nconditions, and we end up with the previous item again. I'm not entirely\nsure how presence of an empty page could change this scenario, could you\nplease show an example?\n\n> *The \"jump one more time if it's the same as at the beginning\" thing\n> seems scary to me. Maybe you should be doing something with the actual\n> high key here.\n\nSame as for the previous question, can you give a hint what do you mean\nby \"doing something with the actual high key\"?\n\n\n",
"msg_date": "Tue, 6 Oct 2020 17:20:39 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "> On Tue, Oct 06, 2020 at 05:20:39PM +0200, Dmitry Dolgov wrote:\n> > On Mon, Sep 21, 2020 at 05:59:32PM -0700, Peter Geoghegan wrote:\n> >\n> > * I see the following compiler warning:\n> >\n> > /code/postgresql/patch/build/../source/src/backend/optimizer/path/uniquekeys.c:\n> > In function ‘populate_baserel_uniquekeys’:\n> > /code/postgresql/patch/build/../source/src/backend/optimizer/path/uniquekeys.c:797:13:\n> > warning: ‘expr’ may be used uninitialized in this function\n> > [-Wmaybe-uninitialized]\n> > 797 | else if (!list_member(unique_index->rel->reltarget->exprs, expr))\n> > | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n>\n> This is mostly for UniqueKeys patch, which is attached here only as a\n> dependency, but I'll prepare changes for that. Interesting enough I\n> can't reproduce this warning, but if I understand correctly gcc has some\n> history of spurious uninitialized warnings, so I guess it could be\n> version dependent.\n>\n> > * Perhaps the warning is related to this nearby code that I noticed\n> > Valgrind complains about:\n> >\n> > ==1083468== VALGRINDERROR-BEGIN\n> > ==1083468== Invalid read of size 4\n> > ==1083468== at 0x59568A: get_exprs_from_uniqueindex (uniquekeys.c:771)\n> > ==1083468== by 0x593C5B: populate_baserel_uniquekeys (uniquekeys.c:140)\n>\n> This also belongs to UniqueKeys patch, but at least I can reproduce this\n> one. My guess is that nkeycolums should be used there, not ncolums,\n> which is visible in index_incuding tests. The same as previous one, will\n> prepare corresponding changes.\n>\n> > * Do we really need the AM-level boolean flag/argument named\n> > \"scanstart\"? Why not just follow the example of btgettuple(), which\n> > determines whether or not the scan has been initialized based on the\n> > current scan position?\n> >\n> > Just because you set so->currPos.buf to InvalidBuffer doesn't mean you\n> > cannot or should not take the same approach as btgettuple(). 
And even\n> > if you can't take exactly the same approach, I would still think that\n> > the scan's opaque B-Tree state should remember if it's the first call\n> > to _bt_skip() (rather than some subsequent call) in some other way\n> > (e.g. carrying a \"scanstart\" bool flag directly).\n>\n> Yes, agree, carrying this flag inside the opaque state would be better.\n\nHere is a new version which doesn't require \"scanstart\" argument and\ncontains few other changes to address the issues mentioned earlier. It's\nalso based on the latest UniqueKeys patches with the valgrind issue\nfixed (as before they're attached also just for the references, you can\nfind more in the original thread). I didn't rework commentaries yet,\nwill post it soon (need to get an inspiration first, probably via\nreading Shakespeare unless someone has better suggestions).\n\n> > * Why is it okay to do anything important based on the\n> > _bt_scankey_within_page() return value?\n> >\n> > If the page is empty, then how can we know that it's okay to go to the\n> > next value? I'm concerned that there could be subtle bugs in this\n> > area. VACUUM will usually just delete the empty page. But it won't\n> > always do so, for a variety of reasons that aren't worth going into\n> > now. This could mask bugs in this area. I'm concerned about patterns\n> > like this one from _bt_skip():\n> >\n> > while (!nextFound)\n> > {\n> > ....\n> >\n> > if (_bt_scankey_within_page(scan, so->skipScanKey,\n> > so->currPos.buf, dir))\n> > {\n> > ...\n> > }\n> > else\n> > /*\n> > * If startItup could be not found within the current page,\n> > * assume we found something new\n> > */\n> > nextFound = true;\n> > ....\n> > }\n> >\n> > Why would you assume that \"we found something new\" here? In general I\n> > just don't understand the design of _bt_skip(). I get the basic idea\n> > of what you're trying to do, but it could really use better comments.\n>\n> Yeah, I'll put more efforts into clear comments. 
There are two different\n> ways in which _bt_scankey_within_page is being used.\n>\n> The first one is to check if it's possible to skip traversal of the tree\n> from root in case if what we're looking for could be on the current\n> page. In this case an empty page would mean we need to search from the\n> root, so not sure what could be the issue here?\n>\n> The second one (that you've highlighted above) I admit is probably the\n> most questionable part of the patch and open for suggestions how to\n> improve it. It's required for one particular case with a cursor when\n> scan advances forward but reads backward. What could happen here is we\n> found one valid item, but the next one e.g. do not pass scan key\n> conditions, and we end up with the previous item again. I'm not entirely\n> sure how presence of an empty page could change this scenario, could you\n> please show an example?\n>\n> > *The \"jump one more time if it's the same as at the beginning\" thing\n> > seems scary to me. Maybe you should be doing something with the actual\n> > high key here.\n>\n> Same as for the previous question, can you give a hint what do you mean\n> by \"doing something with the actual high key\"?\n\nThe question is still there and I would really appreciate clarification\nabout what exactly scenarios I need to look for with empty pages. I've\ntried to perform testing with \"attempt_pagedel = false\" suggestion, but\ndidn't find anything suspicious.",
"msg_date": "Sat, 24 Oct 2020 18:45:53 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "On 24/10/2020 19:45, Dmitry Dolgov wrote:\n> Here is a new version which doesn't require \"scanstart\" argument and\n> contains few other changes to address the issues mentioned earlier. It's\n> also based on the latest UniqueKeys patches with the valgrind issue\n> fixed (as before they're attached also just for the references, you can\n> find more in the original thread). I didn't rework commentaries yet,\n> will post it soon (need to get an inspiration first, probably via\n> reading Shakespeare unless someone has better suggestions).\n\nI had a quick look at this patch. I haven't been following this thread, \nso sorry if I'm repeating old arguments, but here we go:\n\n- I'm surprised you need a new index AM function (amskip) for this. \nCan't you just restart the scan with index_rescan()? The btree AM can \ncheck if the new keys are on the same page, and optimize the rescan \naccordingly, like amskip does. That would speed up e.g. nested loop \nscans too, where the keys just happen to be clustered.\n\n- Does this optimization apply to bitmap index scans?\n\n- This logic in build_index_paths() is not correct:\n\n> +\t\t/*\n> +\t\t * Skip scan is not supported when there are qual conditions, which are not\n> +\t\t * covered by index. The reason for that is that those conditions are\n> +\t\t * evaluated later, already after skipping was applied.\n> +\t\t *\n> +\t\t * TODO: This implementation is too restrictive, and doesn't allow e.g.\n> +\t\t * index expressions. 
For that we need to examine index_clauses too.\n> +\t\t */\n> +\t\tif (root->parse->jointree != NULL)\n> +\t\t{\n> +\t\t\tListCell *lc;\n> +\n> +\t\t\tforeach(lc, (List *)root->parse->jointree->quals)\n> +\t\t\t{\n> +\t\t\t\tNode *expr, *qual = (Node *) lfirst(lc);\n> +\t\t\t\tVar *var;\n> +\t\t\t\tbool found = false;\n> +\n> +\t\t\t\tif (!is_opclause(qual))\n> +\t\t\t\t{\n> +\t\t\t\t\tnot_empty_qual = true;\n> +\t\t\t\t\tbreak;\n> +\t\t\t\t}\n> +\n> +\t\t\t\texpr = get_leftop(qual);\n> +\n> +\t\t\t\tif (!IsA(expr, Var))\n> +\t\t\t\t{\n> +\t\t\t\t\tnot_empty_qual = true;\n> +\t\t\t\t\tbreak;\n> +\t\t\t\t}\n> +\n> +\t\t\t\tvar = (Var *) expr;\n> +\n> +\t\t\t\tfor (int i = 0; i < index->ncolumns; i++)\n> +\t\t\t\t{\n> +\t\t\t\t\tif (index->indexkeys[i] == var->varattno)\n> +\t\t\t\t\t{\n> +\t\t\t\t\t\tfound = true;\n> +\t\t\t\t\t\tbreak;\n> +\t\t\t\t\t}\n> +\t\t\t\t}\n> +\n> +\t\t\t\tif (!found)\n> +\t\t\t\t{\n> +\t\t\t\t\tnot_empty_qual = true;\n> +\t\t\t\t\tbreak;\n> +\t\t\t\t}\n> +\t\t\t}\n> +\t\t}\n\nIf you care whether the qual is evaluated by the index AM or not, you \nneed to also check that the operator is indexable. Attached is a query \nthat demonstrates that problem.\n\nI'm actually a bit confused why we need this condition. The IndexScan \nexecutor node should call amskip() only after checking the additional \nquals, no?\n\nAlso, you should probably check that the index quals are in the operator \nfamily as that used for the DISTINCT.\n\n- Heikki",
"msg_date": "Mon, 30 Nov 2020 16:42:20 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "> On Mon, Nov 30, 2020 at 04:42:20PM +0200, Heikki Linnakangas wrote:\n>\n> I had a quick look at this patch. I haven't been following this thread, so\n> sorry if I'm repeating old arguments, but here we go:\n\nThanks!\n\n> - I'm surprised you need a new index AM function (amskip) for this. Can't\n> you just restart the scan with index_rescan()? The btree AM can check if the\n> new keys are on the same page, and optimize the rescan accordingly, like\n> amskip does. That would speed up e.g. nested loop scans too, where the keys\n> just happen to be clustered.\n\nAn interesting point. At the moment I'm not sure whether it's possible\nto implement skipping via index_rescan or not, need to take a look. But\nchecking if the new keys are on the same page would introduce some\noverhead I guess, wouldn't it be too invasive to add it into already\nexisting btree AM?\n\n> - Does this optimization apply to bitmap index scans?\n\nNo, from what I understand it doesn't.\n\n> - This logic in build_index_paths() is not correct:\n>\n> > +\t\t/*\n> > +\t\t * Skip scan is not supported when there are qual conditions, which are not\n> > +\t\t * covered by index. The reason for that is that those conditions are\n> > +\t\t * evaluated later, already after skipping was applied.\n> > +\t\t *\n> > +\t\t * TODO: This implementation is too restrictive, and doesn't allow e.g.\n> > +\t\t * index expressions. 
For that we need to examine index_clauses too.\n> > +\t\t */\n> > +\t\tif (root->parse->jointree != NULL)\n> > +\t\t{\n> > +\t\t\tListCell *lc;\n> > +\n> > +\t\t\tforeach(lc, (List *)root->parse->jointree->quals)\n> > +\t\t\t{\n> > +\t\t\t\tNode *expr, *qual = (Node *) lfirst(lc);\n> > +\t\t\t\tVar *var;\n> > +\t\t\t\tbool found = false;\n> > +\n> > +\t\t\t\tif (!is_opclause(qual))\n> > +\t\t\t\t{\n> > +\t\t\t\t\tnot_empty_qual = true;\n> > +\t\t\t\t\tbreak;\n> > +\t\t\t\t}\n> > +\n> > +\t\t\t\texpr = get_leftop(qual);\n> > +\n> > +\t\t\t\tif (!IsA(expr, Var))\n> > +\t\t\t\t{\n> > +\t\t\t\t\tnot_empty_qual = true;\n> > +\t\t\t\t\tbreak;\n> > +\t\t\t\t}\n> > +\n> > +\t\t\t\tvar = (Var *) expr;\n> > +\n> > +\t\t\t\tfor (int i = 0; i < index->ncolumns; i++)\n> > +\t\t\t\t{\n> > +\t\t\t\t\tif (index->indexkeys[i] == var->varattno)\n> > +\t\t\t\t\t{\n> > +\t\t\t\t\t\tfound = true;\n> > +\t\t\t\t\t\tbreak;\n> > +\t\t\t\t\t}\n> > +\t\t\t\t}\n> > +\n> > +\t\t\t\tif (!found)\n> > +\t\t\t\t{\n> > +\t\t\t\t\tnot_empty_qual = true;\n> > +\t\t\t\t\tbreak;\n> > +\t\t\t\t}\n> > +\t\t\t}\n> > +\t\t}\n>\n> If you care whether the qual is evaluated by the index AM or not, you need\n> to also check that the operator is indexable. Attached is a query that\n> demonstrates that problem.\n> ...\n> Also, you should probably check that the index quals are in the operator\n> family as that used for the DISTINCT.\n\nYes, good point, will change this in the next version.\n\n> I'm actually a bit confused why we need this condition. The IndexScan\n> executor node should call amskip() only after checking the additional quals,\n> no?\n\nThis part I don't quite get, what exactly you mean by checking the\nadditional quals in the executor node? But at the end of the day this\ncondition was implemented exactly to address the described issue, which\nwas found later and added to the tests.\n\n\n",
"msg_date": "Tue, 1 Dec 2020 21:21:19 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "On 01/12/2020 22:21, Dmitry Dolgov wrote:\n>> On Mon, Nov 30, 2020 at 04:42:20PM +0200, Heikki Linnakangas wrote:\n>>\n>> I had a quick look at this patch. I haven't been following this thread, so\n>> sorry if I'm repeating old arguments, but here we go:\n> \n> Thanks!\n> \n>> - I'm surprised you need a new index AM function (amskip) for this. Can't\n>> you just restart the scan with index_rescan()? The btree AM can check if the\n>> new keys are on the same page, and optimize the rescan accordingly, like\n>> amskip does. That would speed up e.g. nested loop scans too, where the keys\n>> just happen to be clustered.\n> \n> An interesting point. At the moment I'm not sure whether it's possible\n> to implement skipping via index_rescan or not, need to take a look. But\n> checking if the new keys are on the same page would introduce some\n> overhead I guess, wouldn't it be too invasive to add it into already\n> existing btree AM?\n\nI think it'll be OK. But if it's not, you could add a hint argument to \nindex_rescan() to hint the index AM that the new key is known to be \ngreater than the previous key.\n\n>> - Does this optimization apply to bitmap index scans?\n> \n> No, from what I understand it doesn't.\n\nWould it be hard to add? Don't need to solve everything in the first \nversion of this, but I think in principle you could do the same \noptimization for bitmap index scans, so if the current API can't do it, \nthat's maybe an indication that the API isn't quite right.\n\n>> - This logic in build_index_paths() is not correct:\n>>\n>>> +\t\t/*\n>>> +\t\t * Skip scan is not supported when there are qual conditions, which are not\n>>> +\t\t * covered by index. The reason for that is that those conditions are\n>>> +\t\t * evaluated later, already after skipping was applied.\n>>> +\t\t *\n>>> +\t\t * TODO: This implementation is too restrictive, and doesn't allow e.g.\n>>> +\t\t * index expressions. 
For that we need to examine index_clauses too.\n>>> +\t\t */\n>>> +\t\tif (root->parse->jointree != NULL)\n>>> +\t\t{\n>>> +\t\t\tListCell *lc;\n>>> +\n>>> +\t\t\tforeach(lc, (List *)root->parse->jointree->quals)\n>>> +\t\t\t{\n>>> +\t\t\t\tNode *expr, *qual = (Node *) lfirst(lc);\n>>> +\t\t\t\tVar *var;\n>>> +\t\t\t\tbool found = false;\n>>> +\n>>> +\t\t\t\tif (!is_opclause(qual))\n>>> +\t\t\t\t{\n>>> +\t\t\t\t\tnot_empty_qual = true;\n>>> +\t\t\t\t\tbreak;\n>>> +\t\t\t\t}\n>>> +\n>>> +\t\t\t\texpr = get_leftop(qual);\n>>> +\n>>> +\t\t\t\tif (!IsA(expr, Var))\n>>> +\t\t\t\t{\n>>> +\t\t\t\t\tnot_empty_qual = true;\n>>> +\t\t\t\t\tbreak;\n>>> +\t\t\t\t}\n>>> +\n>>> +\t\t\t\tvar = (Var *) expr;\n>>> +\n>>> +\t\t\t\tfor (int i = 0; i < index->ncolumns; i++)\n>>> +\t\t\t\t{\n>>> +\t\t\t\t\tif (index->indexkeys[i] == var->varattno)\n>>> +\t\t\t\t\t{\n>>> +\t\t\t\t\t\tfound = true;\n>>> +\t\t\t\t\t\tbreak;\n>>> +\t\t\t\t\t}\n>>> +\t\t\t\t}\n>>> +\n>>> +\t\t\t\tif (!found)\n>>> +\t\t\t\t{\n>>> +\t\t\t\t\tnot_empty_qual = true;\n>>> +\t\t\t\t\tbreak;\n>>> +\t\t\t\t}\n>>> +\t\t\t}\n>>> +\t\t}\n>>\n>> If you care whether the qual is evaluated by the index AM or not, you need\n>> to also check that the operator is indexable. Attached is a query that\n>> demonstrates that problem.\n>> ...\n>> Also, you should probably check that the index quals are in the operator\n>> family as that used for the DISTINCT.\n> \n> Yes, good point, will change this in the next version.\n> \n>> I'm actually a bit confused why we need this condition. The IndexScan\n>> executor node should call amskip() only after checking the additional quals,\n>> no?\n> \n> This part I don't quite get, what exactly you mean by checking the\n> additional quals in the executor node? 
But at the end of the day this\n> condition was implemented exactly to address the described issue, which\n> was found later and added to the tests.\n\nAs I understand this, the executor logic goes like this:\n\nquery: SELECT DISTINCT ON (a, b) a, b FROM foo where c like '%y%' and a \nlike 'a%' and b = 'b';\n\n1. Call index_beginscan, keys: a >= 'a', b = 'b'\n\n2. Call index_getnext, which returns first row to the Index Scan node\n\n3. Evaluates the qual \"c like '%y%'\" on the tuple. If it's false, goto \nstep 2 to get next tuple.\n\n4. Return tuple to parent node\n\n5. index_amskip(), to the next tuple with a > 'a'. Goto 2.\n\nThe logic should work fine, even if there are quals that are not \nindexable, like \"c like '%y'\" in the above example. So why doesn't it \nwork? What am I missing?\n\n- Heikki\n\n\n",
"msg_date": "Tue, 1 Dec 2020 22:59:22 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "> On Tue, Dec 01, 2020 at 10:59:22PM +0200, Heikki Linnakangas wrote:\n>\n> > > - Does this optimization apply to bitmap index scans?\n> >\n> > No, from what I understand it doesn't.\n>\n> Would it be hard to add? Don't need to solve everything in the first\n> version of this, but I think in principle you could do the same\n> optimization for bitmap index scans, so if the current API can't do it,\n> that's maybe an indication that the API isn't quite right.\n\nI would expect it should not be hard as at the moment all parts seems\nrelatively generic. But of course I need to check, while it seems no one\nhad bitmap index scans in mind while developing this patch.\n\n> > > I'm actually a bit confused why we need this condition. The IndexScan\n> > > executor node should call amskip() only after checking the additional quals,\n> > > no?\n> >\n> > This part I don't quite get, what exactly you mean by checking the\n> > additional quals in the executor node? But at the end of the day this\n> > condition was implemented exactly to address the described issue, which\n> > was found later and added to the tests.\n>\n> As I understand this, the executor logic goes like this:\n>\n> query: SELECT DISTINCT ON (a, b) a, b FROM foo where c like '%y%' and a\n> like 'a%' and b = 'b';\n>\n> 1. Call index_beginscan, keys: a >= 'a', b = 'b'\n>\n> 2. Call index_getnext, which returns first row to the Index Scan node\n>\n> 3. Evaluates the qual \"c like '%y%'\" on the tuple. If it's false, goto step\n> 2 to get next tuple.\n>\n> 4. Return tuple to parent node\n>\n> 5. index_amskip(), to the next tuple with a > 'a'. Goto 2.\n>\n> The logic should work fine, even if there are quals that are not indexable,\n> like \"c like '%y'\" in the above example. So why doesn't it work? 
What am I\n> missing?\n\nTo remind myself how it works I went through this sequence, and from\nwhat I understand the qual \"c like '%y%'\" is evaluated in this case in\nExecQual, not after index_getnext_tid (and values returned after\nskipping are reported as filtered out). So when it comes to index_skip\nonly quals on a & b were evaluated. Or did you mean something else?\n\nAnother small detail is that in the current implementation there is no\ngoto 2 in the last step. Originally it was like that, but since skipping\nreturns the exact position that we need, there was something like \"find a\nvalue, then do one step back so that index_getnext will find it\".\nUnfortunately this stepping-back part turned out to be a source of\ntrouble, and getting rid of it even allowed making the code somewhat more\nconcise. But of course I'm open for suggestions about improvements.\n\n\n",
"msg_date": "Sat, 5 Dec 2020 18:55:42 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "On Wed, Dec 2, 2020 at 9:59 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On 01/12/2020 22:21, Dmitry Dolgov wrote:\n> >> On Mon, Nov 30, 2020 at 04:42:20PM +0200, Heikki Linnakangas wrote:\n> >> - I'm surprised you need a new index AM function (amskip) for this. Can't\n> >> you just restart the scan with index_rescan()? The btree AM can check if the\n> >> new keys are on the same page, and optimize the rescan accordingly, like\n> >> amskip does. That would speed up e.g. nested loop scans too, where the keys\n> >> just happen to be clustered.\n> >\n> > An interesting point. At the moment I'm not sure whether it's possible\n> > to implement skipping via index_rescan or not, need to take a look. But\n> > checking if the new keys are on the same page would introduce some\n> > overhead I guess, wouldn't it be too invasive to add it into already\n> > existing btree AM?\n>\n> I think it'll be OK. But if it's not, you could add a hint argument to\n> index_rescan() to hint the index AM that the new key is known to be\n> greater than the previous key.\n\nFWIW here's what I wrote about that years ago[1]:\n\n> It works by adding a new index operation 'skip' which the executor\n> code can use during a scan to advance to the next value (for some\n> prefix of the index's columns). That may be a terrible idea and\n> totally unnecessary... but let me explain my\n> reasoning:\n>\n> 1. Perhaps some index implementations can do something better than a\n> search for the next key value from the root. Is it possible or\n> desirable to use the current position as a starting point for a btree\n> traversal? I don't know.\n>\n> 2. It seemed that I'd need to create a new search ScanKey to use the\n> 'rescan' interface for skipping to the next value, but I already had\n> an insertion ScanKey so I wanted a way to just reuse that. 
But maybe\n> there is some other way to reuse existing index interfaces, or maybe\n> there is an easy way to make a new search ScanKey from the existing\n> insertion ScanKey?\n\n[1] https://www.postgresql.org/message-id/CADLWmXWALK8NPZqdnRQiPnrzAnic7NxYKynrkzO_vxYr8enWww%40mail.gmail.com\n\n\n",
"msg_date": "Sun, 6 Dec 2020 09:27:08 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "Hi Dmitry,\n\nOn Sun, Oct 25, 2020 at 1:45 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>\n> > On Tue, Oct 06, 2020 at 05:20:39PM +0200, Dmitry Dolgov wrote:\n> > > On Mon, Sep 21, 2020 at 05:59:32PM -0700, Peter Geoghegan wrote:\n> > >\n> > > * I see the following compiler warning:\n> > >\n> > > /code/postgresql/patch/build/../source/src/backend/optimizer/path/uniquekeys.c:\n> > > In function ‘populate_baserel_uniquekeys’:\n> > > /code/postgresql/patch/build/../source/src/backend/optimizer/path/uniquekeys.c:797:13:\n> > > warning: ‘expr’ may be used uninitialized in this function\n> > > [-Wmaybe-uninitialized]\n> > > 797 | else if (!list_member(unique_index->rel->reltarget->exprs, expr))\n> > > | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> >\n> > This is mostly for UniqueKeys patch, which is attached here only as a\n> > dependency, but I'll prepare changes for that. Interesting enough I\n> > can't reproduce this warning, but if I understand correctly gcc has some\n> > history of spurious uninitialized warnings, so I guess it could be\n> > version dependent.\n> >\n> > > * Perhaps the warning is related to this nearby code that I noticed\n> > > Valgrind complains about:\n> > >\n> > > ==1083468== VALGRINDERROR-BEGIN\n> > > ==1083468== Invalid read of size 4\n> > > ==1083468== at 0x59568A: get_exprs_from_uniqueindex (uniquekeys.c:771)\n> > > ==1083468== by 0x593C5B: populate_baserel_uniquekeys (uniquekeys.c:140)\n> >\n> > This also belongs to UniqueKeys patch, but at least I can reproduce this\n> > one. My guess is that nkeycolums should be used there, not ncolums,\n> > which is visible in index_incuding tests. The same as previous one, will\n> > prepare corresponding changes.\n> >\n> > > * Do we really need the AM-level boolean flag/argument named\n> > > \"scanstart\"? 
Why not just follow the example of btgettuple(), which\n> > > determines whether or not the scan has been initialized based on the\n> > > current scan position?\n> > >\n> > > Just because you set so->currPos.buf to InvalidBuffer doesn't mean you\n> > > cannot or should not take the same approach as btgettuple(). And even\n> > > if you can't take exactly the same approach, I would still think that\n> > > the scan's opaque B-Tree state should remember if it's the first call\n> > > to _bt_skip() (rather than some subsequent call) in some other way\n> > > (e.g. carrying a \"scanstart\" bool flag directly).\n> >\n> > Yes, agree, carrying this flag inside the opaque state would be better.\n>\n> Here is a new version which doesn't require \"scanstart\" argument and\n> contains few other changes to address the issues mentioned earlier. It's\n> also based on the latest UniqueKeys patches with the valgrind issue\n> fixed (as before they're attached also just for the references, you can\n> find more in the original thread). I didn't rework commentaries yet,\n> will post it soon (need to get an inspiration first, probably via\n> reading Shakespeare unless someone has better suggestions).\n>\n> > > * Why is it okay to do anything important based on the\n> > > _bt_scankey_within_page() return value?\n> > >\n> > > If the page is empty, then how can we know that it's okay to go to the\n> > > next value? I'm concerned that there could be subtle bugs in this\n> > > area. VACUUM will usually just delete the empty page. But it won't\n> > > always do so, for a variety of reasons that aren't worth going into\n> > > now. This could mask bugs in this area. 
I'm concerned about patterns\n> > > like this one from _bt_skip():\n> > >\n> > > while (!nextFound)\n> > > {\n> > > ....\n> > >\n> > > if (_bt_scankey_within_page(scan, so->skipScanKey,\n> > > so->currPos.buf, dir))\n> > > {\n> > > ...\n> > > }\n> > > else\n> > > /*\n> > > * If startItup could be not found within the current page,\n> > > * assume we found something new\n> > > */\n> > > nextFound = true;\n> > > ....\n> > > }\n> > >\n> > > Why would you assume that \"we found something new\" here? In general I\n> > > just don't understand the design of _bt_skip(). I get the basic idea\n> > > of what you're trying to do, but it could really use better comments.\n> >\n> > Yeah, I'll put more efforts into clear comments. There are two different\n> > ways in which _bt_scankey_within_page is being used.\n> >\n> > The first one is to check if it's possible to skip traversal of the tree\n> > from root in case if what we're looking for could be on the current\n> > page. In this case an empty page would mean we need to search from the\n> > root, so not sure what could be the issue here?\n> >\n> > The second one (that you've highlighted above) I admit is probably the\n> > most questionable part of the patch and open for suggestions how to\n> > improve it. It's required for one particular case with a cursor when\n> > scan advances forward but reads backward. What could happen here is we\n> > found one valid item, but the next one e.g. do not pass scan key\n> > conditions, and we end up with the previous item again. I'm not entirely\n> > sure how presence of an empty page could change this scenario, could you\n> > please show an example?\n> >\n> > > *The \"jump one more time if it's the same as at the beginning\" thing\n> > > seems scary to me. 
Maybe you should be doing something with the actual\n> > > high key here.\n> >\n> > Same as for the previous question, can you give a hint what do you mean\n> > by \"doing something with the actual high key\"?\n>\n> The question is still there and I would really appreciate clarification\n> about what exactly scenarios I need to look for with empty pages. I've\n> tried to perform testing with \"attempt_pagedel = false\" suggestion, but\n> didn't find anything suspicious.\n\nStatus update for a commitfest entry.\n\nThis patch entry has been \"Waiting on Author\" on CF app and the\ndiscussion seems inactive from the last CF. Could you share the\ncurrent status of this patch? Heikki already sent review comments and\nthere was a discussion but the WoA status is correct? If it needs\nreviews, please rebase the patches and set it to \"Needs Reviews\" on CF\napp. If you're not working on this, I'm going to set it to \"Returned\nwith Feedback\", barring objections.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 28 Jan 2021 21:49:26 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "> On Thu, Jan 28, 2021 at 09:49:26PM +0900, Masahiko Sawada wrote:\n> Hi Dmitry,\n>\n> Status update for a commitfest entry.\n>\n> This patch entry has been \"Waiting on Author\" on CF app and the\n> discussion seems inactive from the last CF. Could you share the\n> current status of this patch? Heikki already sent review comments and\n> there was a discussion but the WoA status is correct? If it needs\n> reviews, please rebase the patches and set it to \"Needs Reviews\" on CF\n> app. If you're not working on this, I'm going to set it to \"Returned\n> with Feedback\", barring objections.\n\nYes, I'm still on it. In fact, I almost immediately sketched up a\ncouple of changes to address Heikki's feedback, but was distracted by\nsubscripting stuff. Will try to send a new version of the patch soon.\n\n\n",
"msg_date": "Thu, 28 Jan 2021 17:07:51 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "> On Sat, Dec 05, 2020 at 06:55:42PM +0100, Dmitry Dolgov wrote:\n> > On Tue, Dec 01, 2020 at 10:59:22PM +0200, Heikki Linnakangas wrote:\n> >\n> > > > - Does this optimization apply to bitmap index scans?\n> > >\n> > > No, from what I understand it doesn't.\n> >\n> > Would it be hard to add? Don't need to solve everything in the first\n> > version of this, but I think in principle you could do the same\n> > optimization for bitmap index scans, so if the current API can't do it,\n> > that's maybe an indication that the API isn't quite right.\n>\n> I would expect it should not be hard as at the moment all parts seems\n> relatively generic. But of course I need to check, while it seems no one\n> had bitmap index scans in mind while developing this patch.\n>\n> On Sun, Dec 06, 2020 at 09:27:08AM +1300, Thomas Munro wrote:\n>\n> FWIW here's what I wrote about that years ago[1]:\n> [1] https://www.postgresql.org/message-id/CADLWmXWALK8NPZqdnRQiPnrzAnic7NxYKynrkzO_vxYr8enWww%40mail.gmail.com\n\nThanks, that clarifies this topic a bit.\n\n> > If you care whether the qual is evaluated by the index AM or not, you need\n> > to also check that the operator is indexable. Attached is a query that\n> > demonstrates that problem.\n> > ...\n> > Also, you should probably check that the index quals are in the operator\n> > family as that used for the DISTINCT.\n>\n> Yes, good point, will change this in the next version.\n\nSorry for such long silence, now I've got a bit of free time after\nsubscripting patch to work on this one. Here is rebased version with a\nfew changes to address Heikki feedback about checking if the qual\noperator is indexable. But...\n\nThis version is based on the old version of UniqueKey patch (first two\nattached patches), mostly because IIUC there is still no final version\nof it ([1], [2]). 
This means index skip scan could be reviewed and\ndiscussed (and I'm planning to review the current design to see if it's\npossible to improve it in view of the latest changes), but\nindependently of the UniqueKey integration as it's subject to change.\nBut I'm afraid that if things continue as they are and there is not much\nprogress with the UniqueKey patch, I will have to withdraw this one\nuntil everything is sorted out there.\n\n[1]: https://www.postgresql.org/message-id/flat/CAKU4AWpQjAqJwQ2X-aR9g3+ZHRzU1k8hNP7A+_mLuOv-n5aVKA@mail.gmail.com\n[2]: https://www.postgresql.org/message-id/flat/CAKU4AWrU35c9g3cE15JmVwh6B2Hzf4hf7cZUkRsiktv7AKR3Ag@mail.gmail.com",
"msg_date": "Sun, 14 Mar 2021 15:48:33 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "Hi,\n\nI took a look at the new patch series, focusing mostly on the uniquekeys\npart. It'd be a bit tedious to explain all the review comments here, so\nattached is a patch series with a \"review\" patch for some of the parts.\nMost of it is fairly small (corrections to comments etc.), I'll go over\nthe more serious part so that we can discuss it here. I'll keep it split\nper parts of the original patch series.\n\nI suggest looking for XXX and FIXME comments in all the review patches.\n\n\n0001\n----\n\n1) I wonder if the loop in set_append_rel_size should do continue\ninstead of break, to continue with other (not whole row) attributes?\n\n2) A couple comments that'd deserve clarification.\n\n\n0002\n----\n\n1) A bunch of comment fixes, clarifications, etc. Various missing\ncomments for functions added. Please review that I got all the details\nright, etc.\n\n2) Significant reworks of the README.uniquekeys - I found the original\nversion rather hard to understand, hopefully this is better. But I'm not\nsure I got all the details right, so please review.\n\n3) I see set_append_rel_pathlist uses IS_PARTITIONED_REL to decide\nwhether we need to generate unique keys. Why not to try doing the same\nthing for plain append relations? Can't we handle at least the case with\njust a single child relation? (Maybe not, we probably don't know if the\nparent is empty in that case. Then perhaps mention that in a comment.)\n\n4) Is there a reason why populate_joinrel_uniquekeys gets executed after\ntry_partitionwise_join? Does that allow generating additional unique\nkeys, or something like that?\n\n5) A lot of new comments in uniquekeys.c - most of it was seriously\nunder-documented. 
Please check I got all the details right.\n\n6) Doesn't populate_baserel_uniquekeys need to look at collations too,\nnot just opfamilies?\n\n7) The code does something like this:\n\n if (!ind->unique || !ind->immediate ||\n (ind->indpred != NIL && !ind->predOK))\n continue;\n\nin a number of places. I suggest we define a nicer macro for that.\n\n8) I'm not sure what happens in populate_baserel_uniquekeys if there are\nmultiple restrictinfos for the same expression.\n\n9) I've modified a couple places to replace this:\n\n foreach(lc, list)\n {\n ...\n\n if (a)\n {\n ...\n }\n }\n\nto something like this:\n\n\n foreach(lc, list)\n {\n ...\n\n if (!a)\n continue;\n\n ...\n }\n\nwhich I think is easier to read (it reduces the level of nesting etc.).\nBut I admit it's also a matter of personal taste, to some extent.\n\n10) in populate_partitionedrel_uniquekeys I've also added a return to\nthe first special-case block, so that the second block does not need to\nbe in an else.\n\n11) It seems weird having to build a new copy of the IndexOptInfo using\nsimple_copy_indexinfo_to_parent just to check that the parent/child are\ncompatible. Seems quite expensive, and the code does it quite often. Why\nnot to invent a much smaller struct just for this?\n\n12) Shouldn't populate_grouprel_uniquekeys do the checks for one-row and\ngrouping sets in the opposite order? I mean, with grouping sets it seems\npossible that we produce multiple rows from one-row input rel, no?\n\n13) Doesn't populate_joinrel_uniquekeys also deal with JOIN_RIGHT? If\nnot, maybe explain that in a comment.\n\n14) I find it rather strange that we use innerrel_keeps_unique for both\ninner-outer and outer-inner directions. 
Perhaps the name is a bit\nmisleading?\n\n15) Why does the first \"innerrel_keeps_unique\" block care about\nJOIN_FULL and the second about JOIN_FULL and JOIN_LEFT?\n\n16) If either of the innerrel_keeps_unique blocks gets executed, doesn't\nthat mean the _ukey_ctx lists may contain incorrect data? Consider that\ninitialize_uniquecontext_for_joinrel sets \"useful=true\" for all indexes,\nand it gets updated only in the blocks. So if the blocks do not execute,\nthis info is wrong, no? If not, why?\n\n17) Isn't the last part of add_combined_uniquekey wrong? It does this:\n\n foreach(lc1, get_exprs_from_uniquekey(..., outer_ukey))\n {\n foreach(lc2, get_exprs_from_uniquekey(..., inner_ukey))\n {\n List *exprs = list_concat_copy(lfirst_node(List, lc1),\n lfirst_node(List, lc2));\n\n joinrel->uniquekeys = lappend(joinrel->uniquekeys,\n makeUniqueKey(exprs, ...));\n }\n }\n\nThat seems to iterate over expressions in the unique keys. But consider\nyou have inner unique key on (a1,a2) and outer unique key (b1,b2). AFAIK\nwe should add (a1,a2,b1,b2) for the join, but this seems to add (a1,b1),\n(a1,b2), (a2,b1), (a2,b2). Seems bogus?\n\n18) I find it a bit annoying that there are no new regression tests.\nSurely we need to test this somehow?\n\n\n0003\n----\n\nJust some comments/whitespace.\n\n\n0004\n----\n\nI wonder why we don't include this in explain TEXT format? Seems it\nmight make it harder to write regression tests for this? It's easier to\njust check that we deduced the right unique key(s) than having to\nconstruct an example where it actually changes the plan.\n\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 17 Mar 2021 03:28:00 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "> On Wed, Mar 17, 2021 at 03:28:00AM +0100, Tomas Vondra wrote:\n> Hi,\n>\n> I took a look at the new patch series, focusing mostly on the uniquekeys\n> part. It'd be a bit tedious to explain all the review comments here, so\n> attached is a patch series with a \"review\" patch for some of the parts.\n\nGreat, thanks.\n\n> Most of it is fairly small (corrections to comments etc.), I'll go over\n> the more serious part so that we can discuss it here. I'll keep it split\n> per parts of the original patch series.\n> I suggest looking for XXX and FIXME comments in all the review patches.\n>\n>\n> 0001\n> ----\n>\n> ....\n>\n> 0002\n> ----\n>\n\nIn fact both 0001 & 0002 belong to another thread, which these days\nspan [1], [2]. I've included them only because they happened to be a\ndependency for index skip scan following David suggestions, sorry if\nit's confusing.\n\nAt the same time the author behind 0001 & 0002 is present in this thread\nas well, maybe Andy can answer these comments right here and better than me.\n\n> 0003\n> ----\n>\n> Just some comments/whitespace.\n>\n>\n> 0004\n> ----\n>\n> I wonder why we don't include this in explain TEXT format? Seems it\n> might make it harder to write regression tests for this? It's easier to\n> just check that we deduced the right unique key(s) than having to\n> construct an example where it actually changes the plan.\n\nYeah, good point. I believe originally it was like that to not make\nexplain too verbose for skip scans, but displaying prefix definitely\ncould be helpful for testing, so will do this (and address other\ncomments as well).\n\n[1]: https://www.postgresql.org/message-id/flat/CAKU4AWpQjAqJwQ2X-aR9g3+ZHRzU1k8hNP7A+_mLuOv-n5aVKA@mail.gmail.com\n[2]: https://www.postgresql.org/message-id/flat/CAKU4AWrU35c9g3cE15JmVwh6B2Hzf4hf7cZUkRsiktv7AKR3Ag@mail.gmail.com\n\n\n",
"msg_date": "Wed, 17 Mar 2021 18:02:16 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "\n\nOn 3/17/21 6:02 PM, Dmitry Dolgov wrote:\n>> On Wed, Mar 17, 2021 at 03:28:00AM +0100, Tomas Vondra wrote:\n>> Hi,\n>>\n>> I took a look at the new patch series, focusing mostly on the uniquekeys\n>> part. It'd be a bit tedious to explain all the review comments here, so\n>> attached is a patch series with a \"review\" patch for some of the parts.\n> \n> Great, thanks.\n> \n>> Most of it is fairly small (corrections to comments etc.), I'll go over\n>> the more serious part so that we can discuss it here. I'll keep it split\n>> per parts of the original patch series.\n>> I suggest looking for XXX and FIXME comments in all the review patches.\n>>\n>>\n>> 0001\n>> ----\n>>\n>> ....\n>>\n>> 0002\n>> ----\n>>\n> \n> In fact both 0001 & 0002 belong to another thread, which these days\n> span [1], [2]. I've included them only because they happened to be a\n> dependency for index skip scan following David suggestions, sorry if\n> it's confusing.\n> \n> At the same time the author behind 0001 & 0002 is present in this thread\n> as well, maybe Andy can answer these comments right here and better than me.\n> \n\nAh, sorry for the confusion. In that case the review comments probably\nbelong to the other threads, so we should move the discussion there.\nIt's not clear to me which of the threads is the right one.\n\n>> 0003\n>> ----\n>>\n>> Just some comments/whitespace.\n>>\n>>\n>> 0004\n>> ----\n>>\n>> I wonder why we don't include this in explain TEXT format? Seems it\n>> might make it harder to write regression tests for this? It's easier to\n>> just check that we deduced the right unique key(s) than having to\n>> construct an example where it actually changes the plan.\n> \n> Yeah, good point. I believe originally it was like that to not make\n> explain too verbose for skip scans, but displaying prefix definitely\n> could be helpful for testing, so will do this (and address other\n> comments as well).\n> \n\nCool. 
Thanks.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 17 Mar 2021 18:33:01 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "Hi,\n\nHere is another take on the patch with a couple of changes:\n\n* I've removed for now UniqueKeys parts. The interaction of skip scan &\n unique keys patch was actually not that big, so the main difference is\n that now the structure itself went away, a list of unique expressions\n is used instead. All the suggestions about how this feature should\n look like from the planning perspective are still there. On the one\n hand it will allow to develop both patches independently and avoid\n confusion for reviewers, on the other UniqueKeys could be easily\n incorporated back when needed.\n\n* Support for skipping in case of moving backward on demand (scroll\n cursor) is moved into a separate patch. This is implemented via\n returning false from IndexSupportsBackwardScan in case if it's a skip\n scan node, which in turn adds Materialize node on top when needed. The\n name SupportsBackwardScan was a bit confusing for me, but it seems\n it's only being used with a cursorOptions check for CURSOR_OPT_SCROLL.\n Eventually those cases when BackwardScanDirection is used are still\n handled by amskip. This change didn't affect the test coverage, all\n the test cases supported in previous patch versions are still there.\n\n About Materialize node, I guess it could be less performant than a\n \"native\" support, but it simplifies the implementation significantly\n to the point that most parts, which were causing questions before, are\n now located in the isolated patch. My idea here is to concentrate\n efforts on the first three patches in this series, and consider the\n rest of them as an experiment field.\n\n* IndexScan support was also relocated into a separate patch, the first\n three patches are now only about IndexOnlyScan.\n\n* Last bits of reviews were incorporated and rebased.",
"msg_date": "Fri, 21 May 2021 17:31:38 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "> On Fri, May 21, 2021 at 05:31:38PM +0200, Dmitry Dolgov wrote:\n> Hi,\n>\n> Here is another take on the patch with a couple of changes:\n>\n> * I've removed for now UniqueKeys parts. The interaction of skip scan &\n> unique keys patch was actually not that big, so the main difference is\n> that now the structure itself went away, a list of unique expressions\n> is used instead. All the suggestions about how this feature should\n> look like from the planning perspective are still there. On the one\n> hand it will allow to develop both patches independently and avoid\n> confusion for reviewers, on the other UniqueKeys could be easily\n> incorporated back when needed.\n>\n> * Support for skipping in case of moving backward on demand (scroll\n> cursor) is moved into a separate patch. This is implemented via\n> returning false from IndexSupportsBackwardScan in case if it's a skip\n> scan node, which in turn adds Materialize node on top when needed. The\n> name SupportsBackwardScan was a bit confusing for me, but it seems\n> it's only being used with a cursorOptions check for CURSOR_OPT_SCROLL.\n> Eventually those cases when BackwardScanDirection is used are still\n> handled by amskip. This change didn't affect the test coverage, all\n> the test cases supported in previous patch versions are still there.\n>\n> About Materialize node, I guess it could be less performant than a\n> \"native\" support, but it simplifies the implementation significantly\n> to the point that most parts, which were causing questions before, are\n> now located in the isolated patch. 
My idea here is to concentrate\n> efforts on the first three patches in this series, and consider the\n> rest of them as an experiment field.\n>\n> * IndexScan support was also relocated into a separate patch, the first\n> three patches are now only about IndexOnlyScan.\n>\n> * Last bits of reviews were incorporated and rebased.\n\nWhile the patch is still waiting for a review, I was motivated by the\nthread [1] to think about it from the interface point of view. Consider\nan index skip scan being just like a normal index scan with a set of\nunderspecified leading search keys. It makes sense to have the same\nstructure \"begin scan\" -> \"get the next tuple\" -> \"end scan\" (now I'm\nnot sure if amskip is a good name to represent that, don't have anything\nbetter yet). But the \"underspecified\" part is currently indeed\ninterpreted in a limited way -- as \"missing\" keys -- and is expressed\nonly via the prefix size. Another option would be e.g. leading keys\nconstrained by a range of values, so generally speaking it makes sense\nto extend amount of the information provided for skipping.\n\nAs a naive approach I've added a new patch into the series, containing\nthe extra data structure (ScanLooseKeys, doesn't have much meaning yet\nexcept somehow representing keys for skipping) used for index skip scan.\nAny thoughts about it?\n\nBesides that the new patch version contains some cleaning up and\naddresses commentaries around leaf page pinning from [1]. The idea\nbehind the series structure is still the same: the first three patches\ncontains the essence of the implementation (hoping to help concentrate\nreview), the rest are more \"experimental\".\n\n[1]: https://www.postgresql.org/message-id/flat/CAH2-WzmUscvoxVkokHxP=uPTDjSi0tJkFpUPD-CeA35dvn-CMw@mail.gmail.com",
"msg_date": "Sat, 22 Jan 2022 22:31:33 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "On Sat, Jan 22, 2022 at 1:32 PM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n\n> > On Fri, May 21, 2021 at 05:31:38PM +0200, Dmitry Dolgov wrote:\n> > Hi,\n> >\n> > Here is another take on the patch with a couple of changes:\n> >\n> > * I've removed for now UniqueKeys parts. The interaction of skip scan &\n> > unique keys patch was actually not that big, so the main difference is\n> > that now the structure itself went away, a list of unique expressions\n> > is used instead. All the suggestions about how this feature should\n> > look like from the planning perspective are still there. On the one\n> > hand it will allow to develop both patches independently and avoid\n> > confusion for reviewers, on the other UniqueKeys could be easily\n> > incorporated back when needed.\n> >\n> > * Support for skipping in case of moving backward on demand (scroll\n> > cursor) is moved into a separate patch. This is implemented via\n> > returning false from IndexSupportsBackwardScan in case if it's a skip\n> > scan node, which in turn adds Materialize node on top when needed. The\n> > name SupportsBackwardScan was a bit confusing for me, but it seems\n> > it's only being used with a cursorOptions check for CURSOR_OPT_SCROLL.\n> > Eventually those cases when BackwardScanDirection is used are still\n> > handled by amskip. This change didn't affect the test coverage, all\n> > the test cases supported in previous patch versions are still there.\n> >\n> > About Materialize node, I guess it could be less performant than a\n> > \"native\" support, but it simplifies the implementation significantly\n> > to the point that most parts, which were causing questions before, are\n> > now located in the isolated patch. 
My idea here is to concentrate\n> > efforts on the first three patches in this series, and consider the\n> > rest of them as an experiment field.\n> >\n> > * IndexScan support was also relocated into a separate patch, the first\n> > three patches are now only about IndexOnlyScan.\n> >\n> > * Last bits of reviews were incorporated and rebased.\n>\n> While the patch is still waiting for a review, I was motivated by the\n> thread [1] to think about it from the interface point of view. Consider\n> an index skip scan being just like a normal index scan with a set of\n> underspecified leading search keys. It makes sense to have the same\n> structure \"begin scan\" -> \"get the next tuple\" -> \"end scan\" (now I'm\n> not sure if amskip is a good name to represent that, don't have anything\n> better yet). But the \"underspecified\" part is currently indeed\n> interpreted in a limited way -- as \"missing\" keys -- and is expressed\n> only via the prefix size. Another option would be e.g. leading keys\n> constrained by a range of values, so generally speaking it makes sense\n> to extend amount of the information provided for skipping.\n>\n> As a naive approach I've added a new patch into the series, containing\n> the extra data structure (ScanLooseKeys, doesn't have much meaning yet\n> except somehow representing keys for skipping) used for index skip scan.\n> Any thoughts about it?\n>\n> Besides that the new patch version contains some cleaning up and\n> addresses commentaries around leaf page pinning from [1]. 
The idea\n> behind the series structure is still the same: the first three patches\n> contains the essence of the implementation (hoping to help concentrate\n> review), the rest are more \"experimental\".\n>\n> [1]:\n> https://www.postgresql.org/message-id/flat/CAH2-WzmUscvoxVkokHxP=uPTDjSi0tJkFpUPD-CeA35dvn-CMw@mail.gmail.com\n\nHi,\n\n+ /* If same EC already is already in the list, then not unique */\n\nThe word already is duplicated.\n\n+ * make_pathkeys_for_uniquekeyclauses\n\nThe func name in the comment is different from the actual func name.\n\n+ * Portions Copyright (c) 2020, PostgreSQL Global Development Group\n\nThe year should be 2022 :-)\n\nmake_pathkeys_for_uniquekeys() is called by build_uniquekeys(). Should\nmake_pathkeys_for_uniquekeys() be moved into uniquekeys.c ?\n\n+query_has_uniquekeys_for(PlannerInfo *root, List *path_uniquekeys,\n+ bool allow_multinulls)\n\nIt seems allow_multinulls is not checked in the func. Can the parameter be\nremoved ?\n\n+ Path *newpath;\n+\n+ newpath = (Path *) create_projection_path(root, rel, subpath,\n+ scanjoin_target);\n\nYou can remove variable newpath and assign to lfirst(lc) directly.\n\n +add_path(RelOptInfo *parent_rel, Path *new_path)\n+add_unique_path(RelOptInfo *parent_rel, Path *new_path)\n\nIt seems the above two func's can be combined into one func which\ntakes parent_rel->pathlist / parent_rel->unique_pathlist as third parameter.\n\nCheers",
"msg_date": "Sun, 23 Jan 2022 16:25:04 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "> On Sun, Jan 23, 2022 at 04:25:04PM -0800, Zhihong Yu wrote:\n> On Sat, Jan 22, 2022 at 1:32 PM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> > Besides that the new patch version contains some cleaning up and\n> > addresses commentaries around leaf page pinning from [1]. The idea\n> > behind the series structure is still the same: the first three patches\n> > contains the essence of the implementation (hoping to help concentrate\n> > review), the rest are more \"experimental\".\n> >\n> > [1]:\n> > https://www.postgresql.org/message-id/flat/CAH2-WzmUscvoxVkokHxP=uPTDjSi0tJkFpUPD-CeA35dvn-CMw@mail.gmail.com\n>\n> Hi,\n>\n> + /* If same EC already is already in the list, then not unique */\n>\n> The word already is duplicated.\n>\n> + * make_pathkeys_for_uniquekeyclauses\n>\n> The func name in the comment is different from the actual func name.\n\nThanks for the review! Right, both above make sense. I'll wait a bit if\nthere will be more commentaries, and then post a new version with all\nchanges at once.\n\n> + * Portions Copyright (c) 2020, PostgreSQL Global Development Group\n>\n> The year should be 2022 :-)\n\nNow you see how old is this patch :)\n\n> make_pathkeys_for_uniquekeys() is called by build_uniquekeys(). Should\n> make_pathkeys_for_uniquekeys() be moved into uniquekeys.c ?\n\nIt's actually placed there by analogy with make_pathkeys_for_sortclauses\n(immediately preceding function), so I think moving it into uniquekeys\nwill only make more confusion.\n\n> +query_has_uniquekeys_for(PlannerInfo *root, List *path_uniquekeys,\n> + bool allow_multinulls)\n>\n> It seems allow_multinulls is not checked in the func. Can the parameter be\n> removed ?\n\nRight, it could be removed. 
I believe it was somewhat important when the\npatch was tightly coupled with the UniqueKeys patch, where it was put in\nuse.\n\n> + Path *newpath;\n> +\n> + newpath = (Path *) create_projection_path(root, rel, subpath,\n> + scanjoin_target);\n>\n> You can remove variable newpath and assign to lfirst(lc) directly.\n\nYes, but I've followed the same style for create_projection_path as in\nmany other invocations of this function in planner.c -- I would prefer\nto keep it uniform.\n\n> +add_path(RelOptInfo *parent_rel, Path *new_path)\n> +add_unique_path(RelOptInfo *parent_rel, Path *new_path)\n>\n> It seems the above two func's can be combined into one func which\n> takes parent_rel->pathlist / parent_rel->unique_pathlist as third parameter.\n\nSure, but here I've intentionally split it into separate functions,\notherwise a lot of not relevant call sites have to be updated to provide\nthe third parameter.\n\n\n",
"msg_date": "Mon, 24 Jan 2022 17:51:27 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "On Mon, Jan 24, 2022 at 8:51 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n\n> > On Sun, Jan 23, 2022 at 04:25:04PM -0800, Zhihong Yu wrote:\n> > On Sat, Jan 22, 2022 at 1:32 PM Dmitry Dolgov <9erthalion6@gmail.com>\n> wrote:\n> > > Besides that the new patch version contains some cleaning up and\n> > > addresses commentaries around leaf page pinning from [1]. The idea\n> > > behind the series structure is still the same: the first three patches\n> > > contains the essence of the implementation (hoping to help concentrate\n> > > review), the rest are more \"experimental\".\n> > >\n> > > [1]:\n> > >\n> https://www.postgresql.org/message-id/flat/CAH2-WzmUscvoxVkokHxP=uPTDjSi0tJkFpUPD-CeA35dvn-CMw@mail.gmail.com\n> >\n> > Hi,\n> >\n> > + /* If same EC already is already in the list, then not unique */\n> >\n> > The word already is duplicated.\n> >\n> > + * make_pathkeys_for_uniquekeyclauses\n> >\n> > The func name in the comment is different from the actual func name.\n>\n> Thanks for the review! Right, both above make sense. I'll wait a bit if\n> there will be more commentaries, and then post a new version with all\n> changes at once.\n>\n> > + * Portions Copyright (c) 2020, PostgreSQL Global Development Group\n> >\n> > The year should be 2022 :-)\n>\n> Now you see how old is this patch :)\n>\n> > make_pathkeys_for_uniquekeys() is called by build_uniquekeys(). Should\n> > make_pathkeys_for_uniquekeys() be moved into uniquekeys.c ?\n>\n> It's actually placed there by analogy with make_pathkeys_for_sortclauses\n> (immediately preceding function), so I think moving it into uniquekeys\n> will only make more confusion.\n>\n> > +query_has_uniquekeys_for(PlannerInfo *root, List *path_uniquekeys,\n> > + bool allow_multinulls)\n> >\n> > It seems allow_multinulls is not checked in the func. Can the parameter\n> be\n> > removed ?\n>\n> Right, it could be removed. 
I believe it was somewhat important when the\n> patch was tightly coupled with the UniqueKeys patch, where it was put in\n> use.\n>\n> Hi,\nIt would be nice to take out this unused parameter for this patch.\n\nThe parameter should be added in patch series where it is used.\n\nCheers",
"msg_date": "Mon, 24 Jan 2022 10:17:57 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
},
{
"msg_contents": "On 6/9/20 06:22, Dmitry Dolgov wrote:\n> Here is a new version of index skip scan patch, based on v8 patch for\n> UniqueKeys implementation from [1]. I want to start a new thread to\n> simplify navigation, hopefully I didn't forget anyone who actively\n> participated in the discussion.\n>\n\nThis CommitFest entry has been closed with RwF at [1].\n\nThanks for all the feedback given !\n\n[1] \nhttps://www.postgresql.org/message-id/ab8636e7-182f-886a-3a39-f3fc279ca45d%40redhat.com\n\nBest regards,\n Dmitry & Jesper\n\n\n\n",
"msg_date": "Thu, 24 Mar 2022 07:39:31 -0400",
"msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan (new UniqueKeys)"
}
] |
[
{
"msg_contents": "Hi,\n\nPostgres 12 introduced TableAm api. Although as far as I can see, currently only heap is\nincluded as access method, it is fair to imagine that users will start adding their own methods\nand more methods to be included in Postgres core.\n\nWith that in mind, it might be desirable for a user to see the access method when describing\nin verbose mode, e.g. `\\d+`.\n\nA small patch is attached [1] to see if you think it makes sense. I have not included any\ndifferences in the tests output yet, as the idea might get discarded. However, if the patch is\nfound useful, I shall amend the test results as needed.\n\nCheers,\n//Georgios",
"msg_date": "Tue, 09 Jun 2020 10:29:44 +0000",
"msg_from": "Georgios <gkokolatos@protonmail.com>",
"msg_from_op": true,
"msg_subject": "Include access method in listTables output"
},
{
"msg_contents": "On Tue, 9 Jun 2020 at 23:03, Georgios <gkokolatos@protonmail.com> wrote:\n> A small patch is attached [1] to see if you think it makes sense. I have not included any\n> differences in the tests output yet, as the idea might get discarded. However if the patch is\n> found useful. I shall ament the test results as needed.\n\nIt seems like a fair thing to need/want. We do already show this in\n\\d+ tablename, so it seems pretty fair to want it in the \\d+ output\ntoo\n\nPlease add it to the commitfest at https://commitfest.postgresql.org/28/\n\nDavid\n\n\n",
"msg_date": "Tue, 9 Jun 2020 23:34:51 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Include access method in listTables output"
},
{
"msg_contents": "\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Tuesday, June 9, 2020 1:34 PM, David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Tue, 9 Jun 2020 at 23:03, Georgios gkokolatos@protonmail.com wrote:\n>\n> > A small patch is attached [1] to see if you think it makes sense. I have not included any\n> > differences in the tests output yet, as the idea might get discarded. However if the patch is\n> > found useful. I shall ament the test results as needed.\n>\n> It seems like a fair thing to need/want. We do already show this in\n> \\d+ tablename, so it seems pretty fair to want it in the \\d+ output\n> too\n>\n> Please add it to the commitfest at https://commitfest.postgresql.org/28/\n\nThank you very much for your time. Added to the commitfest as suggested.\n\n>\n> David\n\n\n\n\n",
"msg_date": "Tue, 09 Jun 2020 12:19:31 +0000",
"msg_from": "Georgios <gkokolatos@protonmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Include access method in listTables output"
},
{
"msg_contents": "On Tue, Jun 9, 2020 at 6:45 PM Georgios <gkokolatos@protonmail.com> wrote:\n>\n\n> > Please add it to the commitfest at https://commitfest.postgresql.org/28/\n>\n> Thank you very much for your time. Added to the commitfest as suggested.\n\nPatch applies cleanly, make check & make check-world passes.\n\nFew comments:\n+ if (pset.sversion >= 120000)\n+ appendPQExpBufferStr(&buf,\n+ \"\\n LEFT JOIN pg_catalog.pg_am am ON am.oid = c.relam\");\n\nShould we include pset.hide_tableam check along with the version check?\n\n+ if (pset.sversion >= 120000 && !pset.hide_tableam)\n+ {\n+ appendPQExpBuffer(&buf,\n+ \",\\n am.amname as \\\"%s\\\"\",\n+ gettext_noop(\"Access Method\"));\n+ }\n\nBraces can be removed & the second line can be combined with earlier.\n\nWe could include the test in psql file by configuring HIDE_TABLEAM.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 15 Jun 2020 06:50:10 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Include access method in listTables output"
},
{
"msg_contents": "‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Monday, June 15, 2020 3:20 AM, vignesh C <vignesh21@gmail.com> wrote:\n\n> On Tue, Jun 9, 2020 at 6:45 PM Georgios gkokolatos@protonmail.com wrote:\n>\n> >\n>\n> > > Please add it to the commitfest at https://commitfest.postgresql.org/28/\n> >\n> > Thank you very much for your time. Added to the commitfest as suggested.\n>\n> Patch applies cleanly, make check & make check-world passes.\n\nThank you for your time and comments! Please find v2 of the patch\nattached.\n\n>\n> Few comments:\n>\n> - if (pset.sversion >= 120000)\n>\n> - appendPQExpBufferStr(&buf,\n>\n> - \"\\n LEFT JOIN pg_catalog.pg_am am ON am.oid = c.relam\");\n>\n> Should we include pset.hide_tableam check along with the version check?\n\nI opted against it, since it seems more intuitive to have a single\nswitch and placed on the display part. A similar pattern can be found\nin describeOneTableDetails(). I do not hold a strong opinion and will\ngladly amend if insisted upon.\n\n>\n> - if (pset.sversion >= 120000 && !pset.hide_tableam)\n>\n> - {\n>\n> - appendPQExpBuffer(&buf,\n>\n> - \",\\n am.amname as \\\"%s\\\"\",\n>\n> - gettext_noop(\"Access Method\"));\n>\n> - }\n>\n> Braces can be removed & the second line can be combined with earlier.\n\nAgreed on the braces.\n\nDisagree on the second line as this style is in line with the rest of\ncode. Consistency, buf, format string and gettext_noop() are found in\ntheir own lines within this file.\n\n>\n> We could include the test in psql file by configuring HIDE_TABLEAM.\n>\n\nAgreed.\n\n> Regards,\n> Vignesh\n> EnterpriseDB: http://www.enterprisedb.com\n>",
"msg_date": "Tue, 16 Jun 2020 12:43:40 +0000",
"msg_from": "Georgios <gkokolatos@protonmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Include access method in listTables output"
},
{
"msg_contents": "On Tue, Jun 16, 2020 at 6:13 PM Georgios <gkokolatos@protonmail.com> wrote:\n> > Few comments:\n> >\n> > - if (pset.sversion >= 120000)\n> >\n> > - appendPQExpBufferStr(&buf,\n> >\n> > - \"\\n LEFT JOIN pg_catalog.pg_am am ON am.oid = c.relam\");\n> >\n> > Should we include pset.hide_tableam check along with the version check?\n>\n> I opted against it, since it seems more intuitive to have a single\n> switch and placed on the display part. A similar pattern can be found\n> in describeOneTableDetails(). I do not hold a strong opinion and will\n> gladly ament if insisted upon.\n>\n\nI felt we could add that check as we might be including\npg_catalog.pg_am in cases even though we really don't need it.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 20 Jun 2020 18:45:20 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Include access method in listTables output"
},
{
"msg_contents": "‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Saturday, June 20, 2020 3:15 PM, vignesh C <vignesh21@gmail.com> wrote:\n\n> On Tue, Jun 16, 2020 at 6:13 PM Georgios gkokolatos@protonmail.com wrote:\n>\n> > > Few comments:\n> > >\n> > > - if (pset.sversion >= 120000)\n> > >\n> > > - appendPQExpBufferStr(&buf,\n> > >\n> > > - \"\\n LEFT JOIN pg_catalog.pg_am am ON am.oid = c.relam\");\n> > > Should we include pset.hide_tableam check along with the version check?\n> > >\n> >\n> > I opted against it, since it seems more intuitive to have a single\n> > switch and placed on the display part. A similar pattern can be found\n> > in describeOneTableDetails(). I do not hold a strong opinion and will\n> > gladly ament if insisted upon.\n>\n> I felt we could add that check as we might be including\n> pg_catalog.pg_am in cases even though we really don't need it.\n\nAs promised, I gladly amend upon your request. Also included a fix for\na minor oversight in tests; they should now be stable. Finally in this\nversion, I extended a bit the logic to only include the access method column\nif the relations displayed can have one, for example sequences.\n\n>\n> Regards,\n> Vignesh\n> EnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 30 Jun 2020 09:23:26 +0000",
"msg_from": "Georgios <gkokolatos@protonmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Include access method in listTables output"
},
{
"msg_contents": "On Tue, Jun 30, 2020 at 2:53 PM Georgios <gkokolatos@protonmail.com> wrote:\n>\n>\n> As promised, I gladly ament upon your request. Also included a fix for\n> a minor oversight in tests, they should now be stable. Finally in this\n> version, I extended a bit the logic to only include the access method column\n> if the relations displayed can have one, for example sequences.\n>\n\nPatch applies cleanly, make check & make check-world passes.\nOne comment:\n+ if (pset.sversion >= 120000 && !pset.hide_tableam &&\n+ (showTables || showViews || showMatViews ||\nshowIndexes))\n+ appendPQExpBuffer(&buf,\n+ \",\\n\nam.amname as \\\"%s\\\"\",\n+\ngettext_noop(\"Access Method\"));\n\nI'm not sure if we should include showViews, I had seen that the\naccess method was not getting selected for view.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 5 Jul 2020 07:13:10 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Include access method in listTables output"
},
{
"msg_contents": "On Sun, Jul 05, 2020 at 07:13:10AM +0530, vignesh C wrote:\n> I'm not sure if we should include showViews, I had seen that the\n> access method was not getting selected for view.\n\n+1. These have no physical storage, so you are looking here for\nrelkinds that satisfy RELKIND_HAS_STORAGE().\n--\nMichael",
"msg_date": "Mon, 6 Jul 2020 10:12:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Include access method in listTables output"
},
{
"msg_contents": "‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Monday, July 6, 2020 3:12 AM, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Sun, Jul 05, 2020 at 07:13:10AM +0530, vignesh C wrote:\n>\n> > I'm not sure if we should include showViews, I had seen that the\n> > access method was not getting selected for view.\n>\n> +1. These have no physical storage, so you are looking here for\n> relkinds that satisfy RELKIND_HAS_STORAGE().\n\nThank you for the review.\nFind attached v4 of the patch.\n\n>\n> ---------------------------------------------------------------------------------------------------------------\n>\n> Michael",
"msg_date": "Mon, 06 Jul 2020 07:54:12 +0000",
"msg_from": "Georgios <gkokolatos@protonmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Include access method in listTables output"
},
{
"msg_contents": "On Mon, Jul 6, 2020 at 1:24 PM Georgios <gkokolatos@protonmail.com> wrote:\n>\n>\n>\n> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n> On Monday, July 6, 2020 3:12 AM, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> > On Sun, Jul 05, 2020 at 07:13:10AM +0530, vignesh C wrote:\n> >\n> > > I'm not sure if we should include showViews, I had seen that the\n> > > access method was not getting selected for view.\n> >\n> > +1. These have no physical storage, so you are looking here for\n> > relkinds that satisfy RELKIND_HAS_STORAGE().\n>\n> Thank you for the review.\n> Find attached v4 of the patch.\n\nThanks for fixing the comments.\nPatch applies cleanly, make check & make check-world passes.\nI was not sure if any documentation change is required or not for this\nin doc/src/sgml/ref/psql-ref.sgml. Thoughts?\n\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 11 Jul 2020 18:46:43 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Include access method in listTables output"
},
{
"msg_contents": "‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Saturday, July 11, 2020 3:16 PM, vignesh C <vignesh21@gmail.com> wrote:\n\n> On Mon, Jul 6, 2020 at 1:24 PM Georgios gkokolatos@protonmail.com wrote:\n>\n> > ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n> > On Monday, July 6, 2020 3:12 AM, Michael Paquier michael@paquier.xyz wrote:\n> >\n> > > On Sun, Jul 05, 2020 at 07:13:10AM +0530, vignesh C wrote:\n> > >\n> > > > I'm not sure if we should include showViews, I had seen that the\n> > > > access method was not getting selected for view.\n> > >\n> > > +1. These have no physical storage, so you are looking here for\n> > > relkinds that satisfy RELKIND_HAS_STORAGE().\n> >\n> > Thank you for the review.\n> > Find attached v4 of the patch.\n>\n> Thanks for fixing the comments.\n> Patch applies cleanly, make check & make check-world passes.\n> I was not sure if any documentation change is required or not for this\n> in doc/src/sgml/ref/psql-ref.sgml. Thoughts?\n>\n\nThank you for the comment. I added a line in docs. Admittedly, might need tweaking.\n\nPlease find version 5 of the patch attached.\n\n> Regards,\n> Vignesh\n> EnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 16 Jul 2020 14:05:52 +0000",
"msg_from": "Georgios <gkokolatos@protonmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Include access method in listTables output"
},
{
"msg_contents": "On Thu, Jul 16, 2020 at 7:35 PM Georgios <gkokolatos@protonmail.com> wrote:\n>\n>\n>\n> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n> On Saturday, July 11, 2020 3:16 PM, vignesh C <vignesh21@gmail.com> wrote:\n>\n> > On Mon, Jul 6, 2020 at 1:24 PM Georgios gkokolatos@protonmail.com wrote:\n> >\n> > > ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n> > > On Monday, July 6, 2020 3:12 AM, Michael Paquier michael@paquier.xyz wrote:\n> > >\n> > > > On Sun, Jul 05, 2020 at 07:13:10AM +0530, vignesh C wrote:\n> > > >\n> > > > > I'm not sure if we should include showViews, I had seen that the\n> > > > > access method was not getting selected for view.\n> > > >\n> > > > +1. These have no physical storage, so you are looking here for\n> > > > relkinds that satisfy RELKIND_HAS_STORAGE().\n> > >\n> > > Thank you for the review.\n> > > Find attached v4 of the patch.\n> >\n> > Thanks for fixing the comments.\n> > Patch applies cleanly, make check & make check-world passes.\n> > I was not sure if any documentation change is required or not for this\n> > in doc/src/sgml/ref/psql-ref.sgml. Thoughts?\n> >\n>\n> Thank you for the comment. I added a line in docs. 
Admittedly, might need tweaking.\n>\n> Please find version 5 of the patch attached.\n\nMost changes looks fine, minor comments:\n\n \\pset tuples_only false\n -- check conditional tableam display\n--- Create a heap2 table am handler with heapam handler\n+\\pset expanded off\n+CREATE SCHEMA conditional_tableam_display;\n+CREATE ROLE conditional_tableam_display_role;\n+ALTER SCHEMA conditional_tableam_display OWNER TO\nconditional_tableam_display_role;\n+SET search_path TO conditional_tableam_display;\n CREATE ACCESS METHOD heap_psql TYPE TABLE HANDLER heap_tableam_handler;\n\nThis comment might have been removed unintentionally, do you want to\nadd it back?\n\n+-- access method column should not be displayed for sequences\n+\\ds+\n+ List of relations\n+ Schema | Name | Type | Owner | Persistence | Size | Description\n+--------+------+------+-------+-------------+------+-------------\n+(0 rows)\n\nWe can include one test for view.\n\n+ if (pset.sversion >= 120000 && !pset.hide_tableam &&\n+ (showTables || showMatViews || showIndexes))\n+ appendPQExpBuffer(&buf,\n+ \",\\n am.amname as \\\"%s\\\"\",\n+ gettext_noop(\"Access Method\"));\n+\n /*\n * As of PostgreSQL 9.0, use pg_table_size() to show a more accurate\n * size of a table, including FSM, VM and TOAST tables.\n@@ -3772,6 +3778,12 @@ listTables(const char *tabtypes, const char\n*pattern, bool verbose, bool showSys\n appendPQExpBufferStr(&buf,\n \"\\nFROM pg_catalog.pg_class c\"\n \"\\n LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace\");\n+\n+ if (pset.sversion >= 120000 && !pset.hide_tableam &&\n+ (showTables || showMatViews || showIndexes))\n\nIf conditions in both the places are the same, do you want to make it a macro?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 25 Jul 2020 06:11:33 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Include access method in listTables output"
},
{
"msg_contents": "\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Saturday, July 25, 2020 2:41 AM, vignesh C <vignesh21@gmail.com> wrote:\n\n> On Thu, Jul 16, 2020 at 7:35 PM Georgios gkokolatos@protonmail.com wrote:\n>\n> > ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n> > On Saturday, July 11, 2020 3:16 PM, vignesh C vignesh21@gmail.com wrote:\n> >\n> > > On Mon, Jul 6, 2020 at 1:24 PM Georgios gkokolatos@protonmail.com wrote:\n> > >\n> > > > ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n> > > > On Monday, July 6, 2020 3:12 AM, Michael Paquier michael@paquier.xyz wrote:\n> > > >\n> > > > > On Sun, Jul 05, 2020 at 07:13:10AM +0530, vignesh C wrote:\n> > > > >\n> > > > > > I'm not sure if we should include showViews, I had seen that the\n> > > > > > access method was not getting selected for view.\n> > > > >\n> > > > > +1. These have no physical storage, so you are looking here for\n> > > > > relkinds that satisfy RELKIND_HAS_STORAGE().\n> > > >\n> > > > Thank you for the review.\n> > > > Find attached v4 of the patch.\n> > >\n> > > Thanks for fixing the comments.\n> > > Patch applies cleanly, make check & make check-world passes.\n> > > I was not sure if any documentation change is required or not for this\n> > > in doc/src/sgml/ref/psql-ref.sgml. Thoughts?\n> >\n> > Thank you for the comment. I added a line in docs. Admittedly, might need tweaking.\n> > Please find version 5 of the patch attached.\n>\n> Most changes looks fine, minor comments:\n\nThank you for your comments. 
Please find the individual answers inline for completeness.\n\nI'm having issues understanding where you are going with the reviews, can you fully describe\nwhat is it that you wish to be done?\n\n>\n> \\pset tuples_only false\n> -- check conditional tableam display\n> --- Create a heap2 table am handler with heapam handler\n> +\\pset expanded off\n> +CREATE SCHEMA conditional_tableam_display;\n> +CREATE ROLE conditional_tableam_display_role;\n> +ALTER SCHEMA conditional_tableam_display OWNER TO\n> conditional_tableam_display_role;\n> +SET search_path TO conditional_tableam_display;\n> CREATE ACCESS METHOD heap_psql TYPE TABLE HANDLER heap_tableam_handler;\n>\n> This comment might have been removed unintentionally, do you want to\n> add it back?\n\n\nIt was intentional as heap2 table am handler is not getting created. With\nthe code additions the comment does seem out of place and thus removed.\n\n>\n> +-- access method column should not be displayed for sequences\n> +\\ds+\n>\n> - List of relations\n>\n>\n> - Schema | Name | Type | Owner | Persistence | Size | Description\n> +--------+------+------+-------+-------------+------+-------------\n> +(0 rows)\n>\n> We can include one test for view.\n\nThe list of cases in the test for both including and excluding storage is by no means\nexhaustive. I thought that this is acceptable. Adding a test for the view, will still\nnot make it the test exhaustive. 
Are you sure you only suggest the view to be included\nor you would suggest an exhaustive list of tests.\n\n>\n> - if (pset.sversion >= 120000 && !pset.hide_tableam &&\n>\n> - (showTables || showMatViews || showIndexes))\n>\n> - appendPQExpBuffer(&buf,\n>\n> - \",\\n am.amname as \\\"%s\\\"\",\n>\n> - gettext_noop(\"Access Method\"));\n>\n> - /*\n> - As of PostgreSQL 9.0, use pg_table_size() to show a more accurate\n> - size of a table, including FSM, VM and TOAST tables.\n> @@ -3772,6 +3778,12 @@ listTables(const char *tabtypes, const char\n> *pattern, bool verbose, bool showSys\n> appendPQExpBufferStr(&buf,\n> \"\\nFROM pg_catalog.pg_class c\"\n> \"\\n LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace\");\n>\n> -\n> - if (pset.sversion >= 120000 && !pset.hide_tableam &&\n>\n> - (showTables || showMatViews || showIndexes))\n>\n> If conditions in both the places are the same, do you want to make it a macro?\n\nNo, I would rather not if you may. I think that a macro would not add to the readability\nrather it would remove from it. Those are two simple conditionals used in the same function\nvery close to each other.\n\n\n>\n> Regards,\n> Vignesh\n> EnterpriseDB: http://www.enterprisedb.com\n>\n\n\n\n\n",
"msg_date": "Wed, 29 Jul 2020 14:00:47 +0000",
"msg_from": "Georgios <gkokolatos@protonmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Include access method in listTables output"
},
{
"msg_contents": "On Wed, Jul 29, 2020 at 7:30 PM Georgios <gkokolatos@protonmail.com> wrote:\n>\n>\n> I'm having issues understanding where you are going with the reviews, can you fully describe\n> what is it that you wish to be done?\n>\n\nI'm happy with the patch, that was the last of the comments that I had\nfrom my side. Only idea here is to review every line of the code\nbefore passing it to the committer.\n\n> >\n> > \\pset tuples_only false\n> > -- check conditional tableam display\n> > --- Create a heap2 table am handler with heapam handler\n> > +\\pset expanded off\n> > +CREATE SCHEMA conditional_tableam_display;\n> > +CREATE ROLE conditional_tableam_display_role;\n> > +ALTER SCHEMA conditional_tableam_display OWNER TO\n> > conditional_tableam_display_role;\n> > +SET search_path TO conditional_tableam_display;\n> > CREATE ACCESS METHOD heap_psql TYPE TABLE HANDLER heap_tableam_handler;\n> >\n> > This comment might have been removed unintentionally, do you want to\n> > add it back?\n>\n>\n> It was intentional as heap2 table am handler is not getting created. With\n> the code additions the comment does seem out of place and thus removed.\n>\n\nThanks for clarifying it, I wasn't sure if it was completely intentional.\n\n> >\n> > +-- access method column should not be displayed for sequences\n> > +\\ds+\n> >\n> > - List of relations\n> >\n> >\n> > - Schema | Name | Type | Owner | Persistence | Size | Description\n> > +--------+------+------+-------+-------------+------+-------------\n> > +(0 rows)\n> >\n> > We can include one test for view.\n>\n> The list of cases in the test for both including and excluding storage is by no means\n> exhaustive. I thought that this is acceptable. Adding a test for the view, will still\n> not make it the test exhaustive. Are you sure you only suggest the view to be included\n> or you would suggest an exhaustive list of tests.\n>\n\nI had noticed this case while reviewing, you can take a call on it. 
It\nis very difficult to come to a conclusion on what needs to be included\nand what need not be included. I had noticed you have added a test\ncase for sequence. I felt views are similar in this case.\n\n> >\n> > - if (pset.sversion >= 120000 && !pset.hide_tableam &&\n> >\n> > - (showTables || showMatViews || showIndexes))\n> >\n> > - appendPQExpBuffer(&buf,\n> >\n> > - \",\\n am.amname as \\\"%s\\\"\",\n> >\n> > - gettext_noop(\"Access Method\"));\n> >\n> > - /*\n> > - As of PostgreSQL 9.0, use pg_table_size() to show a more accurate\n> > - size of a table, including FSM, VM and TOAST tables.\n> > @@ -3772,6 +3778,12 @@ listTables(const char *tabtypes, const char\n> > *pattern, bool verbose, bool showSys\n> > appendPQExpBufferStr(&buf,\n> > \"\\nFROM pg_catalog.pg_class c\"\n> > \"\\n LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace\");\n> >\n> > -\n> > - if (pset.sversion >= 120000 && !pset.hide_tableam &&\n> >\n> > - (showTables || showMatViews || showIndexes))\n> >\n> > If conditions in both the places are the same, do you want to make it a macro?\n>\n> No, I would rather not if you may. I think that a macro would not add to the readability\n> rather it would remove from it. Those are two simple conditionals used in the same function\n> very close to each other.\n>\n\nOk, we can ignore this.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 1 Aug 2020 08:12:53 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Include access method in listTables output"
},
{
"msg_contents": "On Sat, Aug 1, 2020 at 8:12 AM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > +-- access method column should not be displayed for sequences\n> > > +\\ds+\n> > >\n> > > - List of relations\n> > >\n> > >\n> > > - Schema | Name | Type | Owner | Persistence | Size | Description\n> > > +--------+------+------+-------+-------------+------+-------------\n> > > +(0 rows)\n> > >\n> > > We can include one test for view.\n\nI felt adding one test for view is good and added it.\nAttached a new patch for the same.\n\nI felt patch is in shape for committer to have a look at this.\n\nRegards,\nVignesh",
"msg_date": "Mon, 17 Aug 2020 23:10:05 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Include access method in listTables output"
},
{
"msg_contents": "On Mon, Aug 17, 2020 at 11:10:05PM +0530, vignesh C wrote:\n> On Sat, Aug 1, 2020 at 8:12 AM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > > > +-- access method column should not be displayed for sequences\n> > > > +\\ds+\n> > > >\n> > > > - List of relations\n> > > >\n> > > >\n> > > > - Schema | Name | Type | Owner | Persistence | Size | Description\n> > > > +--------+------+------+-------+-------------+------+-------------\n> > > > +(0 rows)\n> > > >\n> > > > We can include one test for view.\n> \n> I felt adding one test for view is good and added it.\n> Attached a new patch for the same.\n> \n> I felt patch is in shape for committer to have a look at this.\n\nThe patch tester shows it's failing xmllint ; could you send a fixed patch ?\n\n/usr/bin/xmllint --path . --noout --valid postgres.sgml\nref/psql-ref.sgml:1189: parser error : Opening and ending tag mismatch: link line 1187 and para\n\nhttp://cfbot.cputube.org/georgios-kokolatos.html\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 19 Aug 2020 23:31:47 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Include access method in listTables output"
},
{
"msg_contents": "‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Thursday, 20 August 2020 07:31, Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Mon, Aug 17, 2020 at 11:10:05PM +0530, vignesh C wrote:\n>\n> > On Sat, Aug 1, 2020 at 8:12 AM vignesh C vignesh21@gmail.com wrote:\n> >\n> > > > > +-- access method column should not be displayed for sequences\n> > > > > +\\ds+\n> > > > >\n> > > > > - List of relations\n> > > > >\n> > > > >\n> > > > > - Schema | Name | Type | Owner | Persistence | Size | Description\n> > > > > +--------+------+------+-------+-------------+------+-------------\n> > > > > +(0 rows)\n> > > > > We can include one test for view.\n> > > > >\n> >\n> > I felt adding one test for view is good and added it.\n> > Attached a new patch for the same.\n> > I felt patch is in shape for committer to have a look at this.\n>\n> The patch tester shows it's failing xmllint ; could you send a fixed patch ?\n>\n> /usr/bin/xmllint --path . --noout --valid postgres.sgml\n> ref/psql-ref.sgml:1189: parser error : Opening and ending tag mismatch: link line 1187 and para\n>\n> http://cfbot.cputube.org/georgios-kokolatos.html\n\nThank you for pointing it out!\n\nPlease find version 7 attached which hopefully addresses the error along with a proper\nexpansion of the test coverage and removal of recently introduced\nwhitespace warnings.\n\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> Justin",
"msg_date": "Thu, 20 Aug 2020 08:16:19 +0000",
"msg_from": "Georgios <gkokolatos@protonmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Include access method in listTables output"
},
{
"msg_contents": "On Thu, Aug 20, 2020 at 08:16:19AM +0000, Georgios wrote:\n> Please find version 7 attached which hopefully addresses the error along with a proper\n> expansion of the test coverage and removal of recently introduced\n> whitespace warnings.\n\n+CREATE ROLE conditional_tableam_display_role;\nAs a convention, regression tests need to have roles prefixed with\n\"regress_\" or this would cause some buildfarm members to turn red.\nPlease see -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS (you could use\nthat in your environment for example).\n\nSo, as of the tests.. The role gets added to make sure that when\nusing \\d+ on the full schema as well as the various \\d*+ variants we\nhave a consistent owner. The addition of the relation size for the\nsequence and the btree index in the output generated is a problem\nthough, because that's not really portable when compiling with other\npage sizes. It is true that there are other tests failing in this\ncase, but I think that we should try to limit that if we can. In\nshort, I agree that having some tests is better than nothing, but I\nwould suggest to reduce their scope, as per the attached.\n\nAdding \\dE as there are no foreign tables does not make much sense,\nand also I wondered why \\dt+ was not added.\n\nDoes the attached look correct to you?\n--\nMichael",
"msg_date": "Tue, 1 Sep 2020 13:41:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Include access method in listTables output"
},
{
"msg_contents": "\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Tuesday, 1 September 2020 07:41, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Aug 20, 2020 at 08:16:19AM +0000, Georgios wrote:\n>\n> > Please find version 7 attached which hopefully addresses the error along with a proper\n> > expansion of the test coverage and removal of recently introduced\n> > whitespace warnings.\n>\n> +CREATE ROLE conditional_tableam_display_role;\n> As a convention, regression tests need to have roles prefixed with\n> \"regress_\" or this would cause some buildfarm members to turn red.\n> Please see -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS (you could use\n> that in your environment for example).\n\n\nI was wondering about the name. I hoped that it would either come up during review, or that it would be fine.\n\nThank you for the detailed explanation.\n\n>\n> So, as of the tests.. The role gets added to make sure that when\n> using \\d+ on the full schema as well as the various \\d*+ variants we\n> have a consistent owner. The addition of the relation size for the\n> sequence and the btree index in the output generated is a problem\n> though, because that's not really portable when compiling with other\n> page sizes. It is true that there are other tests failing in this\n> case, but I think that we should try to limit that if we can. In\n> short, I agree that having some tests is better than nothing, but I\n> would suggest to reduce their scope, as per the attached.\n\nI could not agree more. 
I have only succumbed to the pressure of reviewing.\n\n>\n> Adding \\dE as there are no foreign tables does not make much sense,\n> and also I wondered why \\dt+ was not added.\n>\n> Does the attached look correct to you?\n\nYou have my :+1:\n\n> Michael\n\n\n\n\n",
"msg_date": "Tue, 01 Sep 2020 10:27:31 +0000",
"msg_from": "Georgios <gkokolatos@protonmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Include access method in listTables output"
},
{
"msg_contents": "On Tue, Sep 01, 2020 at 10:27:31AM +0000, Georgios wrote:\n> On Tuesday, 1 September 2020 07:41, Michael Paquier <michael@paquier.xyz> wrote:\n>> Adding \\dE as there are no foreign tables does not make much sense,\n>> and also I wondered why \\dt+ was not added.\n>>\n>> Does the attached look correct to you?\n> \n> You have my :+1:\n\nOK, applied then.\n--\nMichael",
"msg_date": "Wed, 2 Sep 2020 17:06:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Include access method in listTables output"
}
] |
[
{
"msg_contents": "During the discussion in [0] I noticed that the extract()/date_part() \nvariants for time, timetz, and interval had virtually no test coverage. \nSo I put some more tests together, which should be useful if we decide \nto make any changes in this area per [0].\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/42b73d2d-da12-ba9f-570a-420e0cce19d9%40phystech.edu\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 9 Jun 2020 13:36:58 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "text coverage for EXTRACT()"
},
{
"msg_contents": "On 6/9/20 1:36 PM, Peter Eisentraut wrote:\n> During the discussion in [0] I noticed that the extract()/date_part()\n> variants for time, timetz, and interval had virtually no test coverage.\n> So I put some more tests together, which should be useful if we decide\n> to make any changes in this area per [0].\n\n\nThese look straightforward to me.\n\nLooking at that big table, I see everything is 0-based except the\nquarter. That seems unfortunate, and if this were a new feature I'd\nlobby to have it changed. I don't think we can do anything about it\nnow, though.\n-- \nVik Fearing\n\n\n",
"msg_date": "Tue, 9 Jun 2020 15:02:45 +0200",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: text coverage for EXTRACT()"
},
{
"msg_contents": "Vik Fearing <vik@postgresfriends.org> writes:\n> On 6/9/20 1:36 PM, Peter Eisentraut wrote:\n>> During the discussion in [0] I noticed that the extract()/date_part()\n>> variants for time, timetz, and interval had virtually no test coverage.\n>> So I put some more tests together, which should be useful if we decide\n>> to make any changes in this area per [0].\n\n> These look straightforward to me.\n\n+1 here as well.\n\n> Looking at that big table, I see everything is 0-based except the\n> quarter. That seems unfortunate, and if this were a new feature I'd\n> lobby to have it changed. I don't think we can do anything about it\n> now, though.\n\nYeah, that ship has sailed :-(\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 09 Jun 2020 10:11:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: text coverage for EXTRACT()"
},
{
"msg_contents": "On 2020-06-09 16:11, Tom Lane wrote:\n> Vik Fearing <vik@postgresfriends.org> writes:\n>> On 6/9/20 1:36 PM, Peter Eisentraut wrote:\n>>> During the discussion in [0] I noticed that the extract()/date_part()\n>>> variants for time, timetz, and interval had virtually no test coverage.\n>>> So I put some more tests together, which should be useful if we decide\n>>> to make any changes in this area per [0].\n> \n>> These look straightforward to me.\n> \n> +1 here as well.\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 14 Jun 2020 08:18:01 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: text coverage for EXTRACT()"
}
] |
[
{
"msg_contents": "There is this for loop in mul_var() :\n/*\n * Add the appropriate multiple of var2 into the accumulator.\n *\n * As above, digits of var2 can be ignored if they don't contribute,\n * so we only include digits for which i1+i2+2 <= res_ndigits - 1.\n */\nfor (i2 = Min(var2ndigits - 1, res_ndigits - i1 - 3), i = i1 + i2 + 2;\n i2 >= 0; i2--)\n dig[i--] += var1digit * var2digits[i2];\n\nWith gcc -O3, the above for loop, if simplified, gets auto-vectorized\n[1] ; and this results in speedups for multiplication of PostgreSQL\nnumeric types having large precisions. The speedups start becoming\nnoticeable from around 50 precision onwards. With 50 precision the\nimprovement I saw was 5%, with 60 11%, 120 50%, 240 2.2x, and so on.\nOn my arm64 machine, a similar benefit starts showing up from 20\nprecision onwards. I used this query from regress/sql/numeric_big.sql\n:\nSELECT t1.val * t2.val FROM num_data t1, num_data t2\nIf I use the schema created by numeric_big.sql, the speedup was 2.5x\nto 2.7x across three machines.\n\nAlso, the regress/sql/numeric_big test itself speeds up by 80%\n\nFor the for loop to be auto-vectorized, I had to simplify it to\nsomething like this :\ni2 = Min(var2ndigits - 1, res_ndigits - i1 - 3);\ndigptr = &dig[i1 + 2];\nfor (i = 0; i <= i2; i++)\n digptr[i] += var1digit * var2digits[i];\n\ngcc also can vectorize backward loop such as this :\nfor (i = n-1; i >= 0; i--)\n a += b[i];\ngcc -fopt-info-all gives this info :\nnumeric.c:7217:3: optimized: loop vectorized using 16 byte vectors\n\nBut if the assignment is not as simple as above, it does not vectorize\nthe backward traversal :\ni2 = Min(var2ndigits - 1, res_ndigits - i1 - 3);\ndigptr = &dig[i1 + i2 + 2];\nfor (; i2 >= 0; i2--)\n digptr[i2] += var1digit * var2digits[i2];\nnumeric.c:7380:3: missed: couldn't vectorize loop\nnumeric.c:7381:15: missed: not vectorized: relevant stmt not\nsupported: _39 = *_38;\n\nEven for forward loop traversal, the below can't be 
vectorized\nseemingly because it involves two variables :\ncount = Min(var2ndigits - 1, res_ndigits - i1 - 3) + 1;\ni = i1 + i2 - count + 3;\nfor (i2 = 0; i2 < count; i++, i2++)\n dig[i] += var1digit * var2digits[i2];\nnumeric.c:7394:3: missed: couldn't vectorize loop\nnumeric.c:7395:11: missed: not vectorized: not suitable for gather\nload _37 = *_36;\n\nSo it's better to keep the loop simple :\ni2 = Min(var2ndigits - 1, res_ndigits - i1 - 3);\ndigptr = &dig[i1 + 2];\nfor (i = 0; i <= i2; i++)\n digptr[i] += var1digit * var2digits[i];\nnumeric.c:7387:3: optimized: loop vectorized using 16 byte vectors\n\nAttached is the patch that uses the above loop.\n\nWith the patch, in mul_var() assembly code, I could see the\nmultiply-accumulate instructions that operate on SIMD vectors (these\nare arm64 instructions) :\n smlal v1.4s, v2.4h, v3.4h\n smlal2 v0.4s, v2.8h, v3.8h\n\n\nI extracted the \"SELECT t1.val * t2.val FROM num_data t1, num_data\nt2\" query from regress/sql/numeric_big.sql, and ran it on the data\nthat the test creates (it inserts values with precisions ranging from\n500 to 700). Attached is create_schema.sql which creates the\nregression test schema.\nWith this query, below are the changes in mul_var() figures with and\nwithout patch :\n(All the below figures are with -O3 build.)\n\nHEAD :\n\n+ 64.06% postgres postgres [.] mul_var\n+ 13.00% postgres postgres [.] get_str_from_var\n+ 6.32% postgres [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore\n+ 1.65% postgres [kernel.kallsyms] [k] copy_user_enhanced_fast_string\n+ 1.10% postgres [kernel.kallsyms] [k] _raw_spin_lock\n+ 0.96% postgres [kernel.kallsyms] [k] get_page_from_freelist\n+ 0.73% postgres [kernel.kallsyms] [k] page_counter_try_charge\n+ 0.64% postgres postgres [.] AllocSetAlloc\n\nPatched :\n\n+ 35.91% postgres postgres [.] mul_var\n+ 20.43% postgres postgres [.] 
get_str_from_var\n+ 13.01% postgres [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore\n+ 2.31% postgres [kernel.kallsyms] [k] copy_user_enhanced_fast_string\n+ 1.48% postgres [kernel.kallsyms] [k] _raw_spin_lock\n+ 1.15% postgres [kernel.kallsyms] [k] get_page_from_freelist\n+ 0.99% postgres postgres [.] AllocSetAlloc\n+ 0.58% postgres postgres [.] base_yyparse\n\nTimes in milliseconds for SELECT t1.val * t2.val FROM num_data t1,\nnum_data t2 :\nMachine 1 (amd64)\nHead : .668 .723 .658 .660\nPatched : .288 .280 .282 .282\nMachine 2 (arm64)\nHead : .897 .879 .888 .897\nPatched : .329 .324 .321 .320\n\nAverage times in milliseconds for numeric_big regression test :\nMachine 1 (amd64)\nHead : 801\nPatched : 445\nMachine 2 (arm64)\nHead : 1105\nPatched : 550\n\n\ngcc -O3 option :\n\nI understand we have kept the default gcc CFLAGS to -O2, because, I\nbelieve, we might enable some bugs due to some assumptions in the\ncode, if we make it -O3. But with this patch, we allow products built\nwith -O3 flag to get this benefit.\n\nThe actual gcc option to enable auto-vectorization is\n-ftree-loop-vectorize. But for -O3 it is always true. What we can do\nin the future is to have a separate file that has such optimized code\nthat is proven to work with such optimization flags, and enable the\nrequired compiler flags only for such files, if the build is done with\n-O2.\n\n[1] https://gcc.gnu.org/projects/tree-ssa/vectorization.html#using\n\n\n-- \nThanks,\n-Amit Khandekar\nHuawei Technologies",
"msg_date": "Tue, 9 Jun 2020 17:20:25 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Auto-vectorization speeds up multiplication of large-precision\n numerics"
},
{
"msg_contents": "On 2020-06-09 13:50, Amit Khandekar wrote:\n> Also, the regress/sql/numeric_big test itself speeds up by 80%\n\nThat's nice. I can confirm the speedup:\n\n-O3 without the patch:\n\n numeric ... ok 737 ms\ntest numeric_big ... ok 1014 ms\n\n-O3 with the patch:\n\n numeric ... ok 680 ms\ntest numeric_big ... ok 580 ms\n\nAlso:\n\n-O2 without the patch:\n\n numeric ... ok 693 ms\ntest numeric_big ... ok 1160 ms\n\n-O2 with the patch:\n\n numeric ... ok 677 ms\ntest numeric_big ... ok 917 ms\n\nSo the patch helps either way. But it also seems that without the \npatch, -O3 might be a bit slower in some cases. This might need more \ntesting.\n\n> For the for loop to be auto-vectorized, I had to simplify it to\n> something like this :\n\nWell, how do we make sure we keep it that way? How do we prevent some \nrandom rearranging of the code or some random compiler change to break \nthis again?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 10 Jun 2020 00:50:26 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Auto-vectorization speeds up multiplication of large-precision\n numerics"
},
{
"msg_contents": "On Wed, 10 Jun 2020 at 04:20, Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-06-09 13:50, Amit Khandekar wrote:\n> > Also, the regress/sql/numeric_big test itself speeds up by 80%\n>\n> That's nice. I can confirm the speedup:\n>\n> -O3 without the patch:\n>\n> numeric ... ok 737 ms\n> test numeric_big ... ok 1014 ms\n>\n> -O3 with the patch:\n>\n> numeric ... ok 680 ms\n> test numeric_big ... ok 580 ms\n>\n> Also:\n>\n> -O2 without the patch:\n>\n> numeric ... ok 693 ms\n> test numeric_big ... ok 1160 ms\n>\n> -O2 with the patch:\n>\n> numeric ... ok 677 ms\n> test numeric_big ... ok 917 ms\n>\n> So the patch helps either way.\n\nOh, I didn't observe that the patch helps numeric_big.sql to speed up\nto some extent even with -O2. For me, it takes 805 on head and 732 ms\nwith patch. One possible reason that I can think of is : Because of\nthe forward loop traversal, pre-fetching might be helping. But this is\njust a wild guess.\n\n-O3 : HEAD\ntest numeric ... ok 102 ms\ntest numeric_big ... ok 803 ms\n\n-O3 : patched :\ntest numeric ... ok 100 ms\ntest numeric_big ... ok 450 ms\n\n\n-O2 : HEAD\ntest numeric ... ok 100 ms\ntest numeric_big ... ok 805 ms\n\n-O2 patched :\ntest numeric ... ok 103 ms\ntest numeric_big ... ok 732 ms\n\n> But it also seems that without the patch, -O3 might\n> be a bit slower in some cases. This might need more testing.\n\nFor me, there is no observed change in the times with -O2 versus -O3,\non head. Are you getting a consistent slower numeric*.sql tests with\n-O3 on current code ? Not sure what might be the reason.\nBut this is not related to the patch. Is it with the context of patch\nthat you are suggesting that it might need more testing ? There might\nbe existing cases that might be running a bit slower with O3, but\nthat's strange actually. Probably optimization in those cases might\nnot be working as thought by the compiler, and in fact they might be\nworking negatively. 
Probably that's one of the reasons -O2 is the\ndefault choice.\n\n\n>\n> > For the for loop to be auto-vectorized, I had to simplify it to\n> > something like this :\n>\n> Well, how do we make sure we keep it that way? How do we prevent some\n> random rearranging of the code or some random compiler change to break\n> this again?\n\nI believe the compiler rearranges the code segments w.r.t. one another\nwhen those are independent of each other. I guess the compiler is able\nto identify that. With our case, it's the for loop. There are some\nvariables used inside it, and that would prevent it from moving the\nfor loop. Even if the compiler finds it safe to move relative to\nsurrounding code, it would not spilt the for loop contents themselves,\nso the for loop will remain intact, and so would the vectorization,\nalthough it seems to keep an unrolled version of the loop in case it\nis called with smaller iteration counts. But yes, if someone in the\nfuture tries to change the for loop, it would possibly break the\nauto-vectorization. So, we should have appropriate comments (patch has\nthose). Let me know if you find any possible reasons due to which the\ncompiler would in the future stop the vectorization even when there is\nno change in the for loop.\n\nIt might look safer if we take the for loop out into an inline\nfunction; just to play it safe ?\n\n\n",
"msg_date": "Wed, 10 Jun 2020 17:45:56 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Auto-vectorization speeds up multiplication of large-precision\n numerics"
},
{
"msg_contents": "FYI : this one is present in the July commitfest.\n\n\n",
"msg_date": "Thu, 9 Jul 2020 10:58:20 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Auto-vectorization speeds up multiplication of large-precision\n numerics"
},
{
"msg_contents": "On 2020-06-10 14:15, Amit Khandekar wrote:\n>> Well, how do we make sure we keep it that way? How do we prevent some\n>> random rearranging of the code or some random compiler change to break\n>> this again?\n> I believe the compiler rearranges the code segments w.r.t. one another\n> when those are independent of each other. I guess the compiler is able\n> to identify that. With our case, it's the for loop. There are some\n> variables used inside it, and that would prevent it from moving the\n> for loop. Even if the compiler finds it safe to move relative to\n> surrounding code, it would not spilt the for loop contents themselves,\n> so the for loop will remain intact, and so would the vectorization,\n> although it seems to keep an unrolled version of the loop in case it\n> is called with smaller iteration counts. But yes, if someone in the\n> future tries to change the for loop, it would possibly break the\n> auto-vectorization. So, we should have appropriate comments (patch has\n> those). Let me know if you find any possible reasons due to which the\n> compiler would in the future stop the vectorization even when there is\n> no change in the for loop.\n\nWe normally don't compile with -O3, so very few users would get the \nbenefit of this. We have CFLAGS_VECTOR for the checksum code. I \nsuppose if we are making the numeric code vectorizable as well, we \nshould apply this there also.\n\nI think we need a bit of a policy decision from the group here.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 10 Jul 2020 15:05:15 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Auto-vectorization speeds up multiplication of large-precision\n numerics"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> We normally don't compile with -O3, so very few users would get the \n> benefit of this.\n\nYeah. I don't think changing that baseline globally would be a wise move.\n\n> We have CFLAGS_VECTOR for the checksum code. I \n> suppose if we are making the numeric code vectorizable as well, we \n> should apply this there also.\n\n> I think we need a bit of a policy decision from the group here.\n\nI'd vote in favor of applying CFLAGS_VECTOR to specific source files\nthat can benefit. We already have experience with that and we've not\ndetected any destabilization potential.\n\n(I've not looked at this patch, so don't take this as a +1 for this\nspecific patch.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 Jul 2020 09:32:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Auto-vectorization speeds up multiplication of large-precision\n numerics"
},
{
"msg_contents": "On Fri, 10 Jul 2020 at 19:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > We normally don't compile with -O3, so very few users would get the\n> > benefit of this.\n>\n> Yeah. I don't think changing that baseline globally would be a wise move.\n>\n> > We have CFLAGS_VECTOR for the checksum code. I\n> > suppose if we are making the numeric code vectorizable as well, we\n> > should apply this there also.\n>\n> > I think we need a bit of a policy decision from the group here.\n>\n> I'd vote in favor of applying CFLAGS_VECTOR to specific source files\n> that can benefit. We already have experience with that and we've not\n> detected any destabilization potential.\n\nI tried this in utils/adt/Makefile :\n+\n+numeric.o: CFLAGS += ${CFLAGS_VECTOR}\n+\nand it works.\n\nCFLAGS_VECTOR also includes the -funroll-loops option, which I\nbelieve, had showed improvements in the checksum.c runs ( [1] ). This\noption makes the object file a bit bigger. For numeric.o, it's size\nincreased by 15K; from 116672 to 131360 bytes. I ran the\nmultiplication test, and didn't see any additional speed-up with this\noption. Also, it does not seem to be related to vectorization. So I\nwas thinking of splitting the CFLAGS_VECTOR into CFLAGS_VECTOR and\nCFLAGS_UNROLL_LOOPS. Checksum.c can use both these flags, and\nnumeric.c can use only CFLAGS_VECTOR.\n\nI was also wondering if it's worth to extract only the code that we\nthink can be optimized and keep it in a separate file (say\nnumeric_vectorize.c or adt_vectorize.c, which can have mul_var() to\nstart with), and use this file as a collection of all such code in the\nadt module, and then we can add such files into other modules as and\nwhen needed. 
For numeric.c, there can be already some scope for\nauto-vectorizations in other similar regions in that file, so we don't\nrequire a separate numeric_vectorize.c and just pass the CFLAGS_VECTOR\nflag for this file itself.\n\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BU5nML8JYeGqM-k4eEwNJi5H%3DU57oPLBsBDoZUv4cfcmdnpUA%40mail.gmail.com#2ec419817ff429588dd1229fb663080e\n\n-- \nThanks,\n-Amit Khandekar\nHuawei Technologies\n\n\n",
"msg_date": "Mon, 13 Jul 2020 14:27:19 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Auto-vectorization speeds up multiplication of large-precision\n numerics"
},
{
"msg_contents": "On Mon, 13 Jul 2020 at 14:27, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> I tried this in utils/adt/Makefile :\n> +\n> +numeric.o: CFLAGS += ${CFLAGS_VECTOR}\n> +\n> and it works.\n>\n> CFLAGS_VECTOR also includes the -funroll-loops option, which I\n> believe, had showed improvements in the checksum.c runs ( [1] ). This\n> option makes the object file a bit bigger. For numeric.o, it's size\n> increased by 15K; from 116672 to 131360 bytes. I ran the\n> multiplication test, and didn't see any additional speed-up with this\n> option. Also, it does not seem to be related to vectorization. So I\n> was thinking of splitting the CFLAGS_VECTOR into CFLAGS_VECTOR and\n> CFLAGS_UNROLL_LOOPS. Checksum.c can use both these flags, and\n> numeric.c can use only CFLAGS_VECTOR.\n\nI did as above. Attached is the v2 patch.\n\nIn case of existing CFLAGS_VECTOR, an env variable also could be set\nby that name when running configure. I did the same for\nCFLAGS_UNROLL_LOOPS.\n\nNow, developers who already are using CFLAGS_VECTOR env while\nconfigur'ing might be using this env because their compilers don't\nhave these compiler options so they must be using some equivalent\ncompiler options. numeric.c will now be compiled with CFLAGS_VECTOR,\nso for them it will now be compiled with their equivalent of\nvectorize and unroll-loops option, which is ok, I think. Just that the\nnumeric.o size will be increased, that's it.\n\n>\n> [1] https://www.postgresql.org/message-id/flat/CA%2BU5nML8JYeGqM-k4eEwNJi5H%3DU57oPLBsBDoZUv4cfcmdnpUA%40mail.gmail.com#2ec419817ff429588dd1229fb663080e\n\n\n\n\n-- \nThanks,\n-Amit Khandekar\nHuawei Technologies",
"msg_date": "Tue, 21 Jul 2020 14:46:18 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Auto-vectorization speeds up multiplication of large-precision\n numerics"
},
{
"msg_contents": "Amit Khandekar <amitdkhan.pg@gmail.com> writes:\n> I did as above. Attached is the v2 patch.\n\nI made some cosmetic changes to this and committed it. AFAICT,\nthere's no measurable performance change to the \"numeric\" regression\ntest, but I got a solid 45% speedup on \"numeric_big\", so it's\nclearly a win for wider arithmetic cases.\n\nIt seemed to me to be useful to actually rename CFLAGS_VECTOR\nif we're changing its meaning, so I made it CFLAGS_VECTORIZE.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 06 Sep 2020 21:44:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Auto-vectorization speeds up multiplication of large-precision\n numerics"
},
{
"msg_contents": "I wrote:\n> I made some cosmetic changes to this and committed it.\n\nBTW, poking at this further, it seems that the patch only really\nworks for gcc. clang accepts the -ftree-vectorize switch, but\nlooking at the generated asm shows that it does nothing useful.\nWhich is odd, because clang does do loop vectorization.\n\nI tried adding -Rpass-analysis=loop-vectorize and got\n\nnumeric.c:8341:3: remark: loop not vectorized: could not determine number of loop iterations [-Rpass-analysis=loop-vectorize]\n for (i2 = 0; i2 <= i; i2++)\n ^\n\nwhich is interesting but I don't know how to proceed further.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 07 Sep 2020 01:53:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Auto-vectorization speeds up multiplication of large-precision\n numerics"
},
{
"msg_contents": "On Mon, 7 Sep 2020 at 11:23, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > I made some cosmetic changes to this and committed it.\n\nThanks!\n\n>\n> BTW, poking at this further, it seems that the patch only really\n> works for gcc. clang accepts the -ftree-vectorize switch, but\n> looking at the generated asm shows that it does nothing useful.\n> Which is odd, because clang does do loop vectorization.\n>\n> I tried adding -Rpass-analysis=loop-vectorize and got\n>\n> numeric.c:8341:3: remark: loop not vectorized: could not determine number of loop iterations [-Rpass-analysis=loop-vectorize]\n> for (i2 = 0; i2 <= i; i2++)\n\nHmm, yeah that's unfortunate. My guess is that the compiler would do\nvectorization only if 'i' is a constant, which is not true for our\ncase.\n\n-- \nThanks,\n-Amit Khandekar\nHuawei Technologies\n\n\n",
"msg_date": "Mon, 7 Sep 2020 12:40:49 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Auto-vectorization speeds up multiplication of large-precision\n numerics"
},
{
"msg_contents": "Amit Khandekar <amitdkhan.pg@gmail.com> writes:\n> On Mon, 7 Sep 2020 at 11:23, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> BTW, poking at this further, it seems that the patch only really\n>> works for gcc. clang accepts the -ftree-vectorize switch, but\n>> looking at the generated asm shows that it does nothing useful.\n>> Which is odd, because clang does do loop vectorization.\n\n> Hmm, yeah that's unfortunate. My guess is that the compiler would do\n> vectorization only if 'i' is a constant, which is not true for our\n> case.\n\nNo, they claim to handle variable trip counts, per\n\nhttps://llvm.org/docs/Vectorizers.html#loops-with-unknown-trip-count\n\nI experimented with a few different ideas such as adding restrict\ndecoration to the pointers, and eventually found that what works\nis to write the loop termination condition as \"i2 < limit\"\nrather than \"i2 <= limit\". It took me a long time to think of\ntrying that, because it seemed ridiculously stupid. But it works.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 07 Sep 2020 12:07:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Auto-vectorization speeds up multiplication of large-precision\n numerics"
},
{
"msg_contents": "I wrote:\n> I experimented with a few different ideas such as adding restrict\n> decoration to the pointers, and eventually found that what works\n> is to write the loop termination condition as \"i2 < limit\"\n> rather than \"i2 <= limit\". It took me a long time to think of\n> trying that, because it seemed ridiculously stupid. But it works.\n\nI've done more testing and confirmed that both gcc and clang can\nvectorize the improved loop on aarch64 as well as x86_64. (clang's\nresults can be confusing because -ftree-vectorize doesn't seem to\nhave any effect: its vectorizer is on by default. But if you use\n-fno-vectorize it'll go back to the old, slower code.)\n\nThe only buildfarm effect I've noticed is that locust and\nprairiedog, which are using nearly the same ancient gcc version,\ncomplain\n\nc1: warning: -ftree-vectorize enables strict aliasing. -fno-strict-aliasing is ignored when Auto Vectorization is used.\n\nwhich is expected (they say the same for checksum.c), but then\nthere are a bunch of\n\nwarning: dereferencing type-punned pointer will break strict-aliasing rules\n\nwhich seems worrisome. (This sort of thing is the reason I'm\nhesitant to apply higher optimization levels across the board.)\nBoth animals pass the regression tests anyway, but if any other\ncompilers treat -ftree-vectorize as an excuse to apply stricter\noptimization assumptions, we could be in for trouble.\n\nI looked closer and saw that all of those warnings are about\ninit_var(), and this change makes them go away:\n\n-#define init_var(v) MemSetAligned(v, 0, sizeof(NumericVar))\n+#define init_var(v) memset(v, 0, sizeof(NumericVar))\n\nI'm a little inclined to commit that as future-proofing. It's\nessentially reversing out a micro-optimization I made in d72f6c750.\nI doubt I had hard evidence that it made any noticeable difference;\nand even if it did back then, modern compilers probably prefer the\nmemset approach.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 07 Sep 2020 16:49:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Auto-vectorization speeds up multiplication of large-precision\n numerics"
},
{
"msg_contents": "On Tue, 8 Sep 2020 at 02:19, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > I experimented with a few different ideas such as adding restrict\n> > decoration to the pointers, and eventually found that what works\n> > is to write the loop termination condition as \"i2 < limit\"\n> > rather than \"i2 <= limit\". It took me a long time to think of\n> > trying that, because it seemed ridiculously stupid. But it works.\n\nAh ok.\n\nI checked the \"Auto-Vectorization in LLVM\" link that you shared. All\nthe examples use \"< n\" or \"> n\". None of them use \"<= n\". Looks like a\nhidden restriction.\n\n>\n> I've done more testing and confirmed that both gcc and clang can\n> vectorize the improved loop on aarch64 as well as x86_64. (clang's\n> results can be confusing because -ftree-vectorize doesn't seem to\n> have any effect: its vectorizer is on by default. But if you use\n> -fno-vectorize it'll go back to the old, slower code.)\n>\n> The only buildfarm effect I've noticed is that locust and\n> prairiedog, which are using nearly the same ancient gcc version,\n> complain\n>\n> c1: warning: -ftree-vectorize enables strict aliasing. -fno-strict-aliasing is ignored when Auto Vectorization is used.\n>\n> which is expected (they say the same for checksum.c), but then\n> there are a bunch of\n>\n> warning: dereferencing type-punned pointer will break strict-aliasing rules\n>\n> which seems worrisome. 
(This sort of thing is the reason I'm\n> hesitant to apply higher optimization levels across the board.)\n> Both animals pass the regression tests anyway, but if any other\n> compilers treat -ftree-vectorize as an excuse to apply stricter\n> optimization assumptions, we could be in for trouble.\n>\n> I looked closer and saw that all of those warnings are about\n> init_var(), and this change makes them go away:\n>\n> -#define init_var(v) MemSetAligned(v, 0, sizeof(NumericVar))\n> +#define init_var(v) memset(v, 0, sizeof(NumericVar))\n>\n> I'm a little inclined to commit that as future-proofing. It's\n> essentially reversing out a micro-optimization I made in d72f6c750.\n> I doubt I had hard evidence that it made any noticeable difference;\n> and even if it did back then, modern compilers probably prefer the\n> memset approach.\n\nThanks. I must admit it did not occur to me that I could have very\nwell installed clang on my linux machine and tried compiling this\nfile, or tested with some older gcc versions. I think I was using gcc\n8. Do you know what was the gcc compiler version that gave these\nwarnings ?\n\n-- \nThanks,\n-Amit Khandekar\nHuawei Technologies\n\n\n",
"msg_date": "Tue, 8 Sep 2020 11:20:06 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Auto-vectorization speeds up multiplication of large-precision\n numerics"
},
{
"msg_contents": "Amit Khandekar <amitdkhan.pg@gmail.com> writes:\n> Thanks. I must admit it did not occur to me that I could have very\n> well installed clang on my linux machine and tried compiling this\n> file, or tested with some older gcc versions. I think I was using gcc\n> 8. Do you know what was the gcc compiler version that gave these\n> warnings ?\n\nPer the buildfarm's configure logs, prairiedog is using\n\nconfigure: using compiler=powerpc-apple-darwin8-gcc-4.0.1 (GCC) 4.0.1 (Apple Computer, Inc. build 5341)\n\nIIRC, locust has a newer build number but it's the same underlying gcc\nversion.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 08 Sep 2020 09:49:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Auto-vectorization speeds up multiplication of large-precision\n numerics"
}
] |
[
{
"msg_contents": "Hi,\n\nWe would like to know when we will get Postgres installer with latest pgAdmin 4.22 bundled.\nThe latest Postgres installer (12.3) has only 4.21 which doesn't have fix for certain vulnerabilities related to python.\n\nRegards,\nJoel",
"msg_date": "Tue, 9 Jun 2020 11:58:33 +0000",
"msg_from": "\"Joel Mariadasan (jomariad)\" <jomariad@cisco.com>",
"msg_from_op": true,
"msg_subject": "Postgres installer with pgAdmin 4.22 "
},
{
"msg_contents": "On Tuesday, June 9, 2020, Joel Mariadasan (jomariad) <jomariad@cisco.com>\nwrote:\n>\n> We would like to know when we will get Postgres installer with latest\n> pgAdmin 4.22 bundled.\n>\n> The latest Postgres installer (12.3) has only 4.21 which doesn’t have fix\n> for certain vulnerabilities related to python.\n>\n>\nAsk the provider of your installer. The core project, this list, only\nsupports installation from source code; and furthermore the pgAdmin project\nis developed by an independent group.\n\nDavid J.",
"msg_date": "Tue, 9 Jun 2020 07:25:01 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgres installer with pgAdmin 4.22"
}
] |
[
{
"msg_contents": "Hi All,\nprologue of pg_dump.c says\n * Note that pg_dump runs in a transaction-snapshot mode transaction,\n * so it sees a consistent snapshot of the database including system\n * catalogs. However, it relies in part on various specialized backend\n * functions like pg_get_indexdef(), and those things tend to look at\n * the currently committed state. So it is possible to get 'cache\n * lookup failed' error if someone performs DDL changes while a dump is\n * happening. The window for this sort of thing is from the acquisition\n * of the transaction snapshot to getSchemaData() (when pg_dump acquires\n * AccessShareLock on every table it intends to dump). It isn't very large,\n * but it can happen.\n\nBut this possible failure has not been documented at\nhttps://www.postgresql.org/docs/12/app-pgdump.html. Although the\nwindow for failure is small, it's not impossible. Should we document\nthis kind of failure?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 9 Jun 2020 19:30:13 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_dump and concurrent DDL activity"
}
] |
[
{
"msg_contents": "Hi,\n\nHere is a small patch to fix function names.\n\nSome pg_xlog_replay_resume() was still left in po file.\nFixed these into pg_wal_replay_resume().\n\n\nBest regards,\n--\nKazuki Uehara",
"msg_date": "Wed, 10 Jun 2020 00:53:37 +0900",
"msg_from": "U ikki <uehakz@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fixed the remaining old function names."
},
{
"msg_contents": "On 2020-06-09 17:53, U ikki wrote:\n> Here is a small patch to fix function names.\n> \n> Some pg_xlog_replay_resume() was still left in po file.\n> Fixed these into pg_wal_replay_resume().\n\nThere is a separate process for updating .po files; see \nbabel.postgresql.org.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 10 Jun 2020 00:24:50 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Fixed the remaining old function names."
},
{
"msg_contents": "Thanks for comments.\n\n> There is a separate process for updating .po files; see\n> babel.postgresql.org.\n\nI'll check it.\n\nregards,\n\n2020年6月10日(水) 9:00 U ikki <uehakz@gmail.com>:\n>\n> Thanks for comments.\n>\n> > There is a separate process for updating .po files; see\n> > babel.postgresql.org.\n>\n> I'll check it.\n>\n> regards,\n>\n> 2020年6月10日(水) 7:24 Peter Eisentraut <peter.eisentraut@2ndquadrant.com>:\n> >\n> > On 2020-06-09 17:53, U ikki wrote:\n> > > Here is a small patch to fix function names.\n> > >\n> > > Some pg_xlog_replay_resume() was still left in po file.\n> > > Fixed these into pg_wal_replay_resume().\n> >\n> > There is a separate process for updating .po files; see\n> > babel.postgresql.org.\n> >\n> > --\n> > Peter Eisentraut http://www.2ndQuadrant.com/\n> > PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 10 Jun 2020 09:01:43 +0900",
"msg_from": "U ikki <uehakz@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fixed the remaining old function names."
}
] |
[
{
"msg_contents": "It should be BLCKSZ and LOBLKSIZE, as in the attached.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 10 Jun 2020 17:17:31 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "typos in comments referring to macros"
},
{
"msg_contents": "On Wed, Jun 10, 2020 at 2:47 PM John Naylor <john.naylor@2ndquadrant.com> wrote:\n>\n> It should be BLCKSZ and LOBLKSIZE, as in the attached.\n>\n\nLGTM on first look. I'll push either later today or tomorrow.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 10 Jun 2020 15:19:28 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: typos in comments referring to macros"
},
{
"msg_contents": "On Wed, Jun 10, 2020 at 3:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jun 10, 2020 at 2:47 PM John Naylor <john.naylor@2ndquadrant.com> wrote:\n> >\n> > It should be BLCKSZ and LOBLKSIZE, as in the attached.\n> >\n>\n> LGTM on first look. I'll push either later today or tomorrow.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 11 Jun 2020 15:34:58 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: typos in comments referring to macros"
}
] |
[
{
"msg_contents": "Hi,\n\nPostgres create table statement supports `LIKE source_table [like_option... ]`\nto specify `a table from which the new table automatically copies all column\nnames, their data types, and their not-null constraints.` according to\ndocumentation [1].\n\nI am wondering if a similar clause would make sense to copy relation wide\nsettings. For example consider a relation created like this:\n\n`CREATE TABLE source_table ([column, ...]) USING customam WITH (storage_parameter1 = value1, ... )`\n\nMaybe a statement similar to:\n\n`CREATE TABLE target LIKE source_table`\n\nwhich should be equivalent to:\n\n`CREATE TABLE target (LIKE source_table INCLUDING ALL) USING customam WITH (storage_parameter1 = value1, ...)`\n\ncan be useful as a syntactic shortcut. Maybe the usefulness of such a shortcut\nbecomes a bit more apparent if one considers that custom access methods can\noffer a diversity of storage parameters that interact both at relation and\ncolumn level, especially when the source relation is column oriented.\n\nIf the possibility for such a statement is not discarded, a patch can be readily\nprovided.\n\nCheers,\n//Georgios\n\n[1] https://www.postgresql.org/docs/13/sql-createtable.html\n\n\n\n",
"msg_date": "Wed, 10 Jun 2020 09:42:35 +0000",
"msg_from": "Georgios <gkokolatos@protonmail.com>",
"msg_from_op": true,
"msg_subject": "Relation wide 'LIKE' clause"
},
{
"msg_contents": "On 2020-06-10 11:42, Georgios wrote:\n> Postgres create table statement supports `LIKE source_table [like_option... ]`\n> to specify `a table from which the new table automatically copies all column\n> names, their data types, and their not-null constraints.` according to\n> documentation [1].\n> \n> I am wondering if a similar clause would make sense to copy relation wide\n> settings. For example consider a relation created like this:\n> \n> `CREATE TABLE source_table ([column, ...]) USING customam WITH (storage_parameter1 = value1, ... )`\n\nWe already have LIKE INCLUDING STORAGE. Maybe that should just be \nupdated to work like what you are asking for.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 10 Jun 2020 12:08:23 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Relation wide 'LIKE' clause"
},
{
"msg_contents": "\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Wednesday, June 10, 2020 12:08 PM, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2020-06-10 11:42, Georgios wrote:\n>\n> > Postgres create table statement supports `LIKE source_table [like_option... ]`\n> > to specify `a table from which the new table automatically copies all column names, their data types, and their not-null constraints.` according to\n> > documentation [1].\n> > I am wondering if a similar clause would make sense to copy relation wide\n> > settings. For example consider a relation created like this:\n> > `CREATE TABLE source_table ([column, ...]) USING customam WITH (storage_parameter1 = value1, ... )`\n>\n> We already have LIKE INCLUDING STORAGE. Maybe that should just be\n> updated to work like what you are asking for.\n\nThis is correct. However I do see some limitations there. Consider the valid scenario:\n\n`CREATE TABLE target (LIKE source_am1 INCLUDING ALL, LIKE source_am2 INCLUDING ALL)`\n\nWhich source relation should be used for the access method and storage parameters?\n\nAlso I _think_ that the current `LIKE` clause specifically targets column definitions\nin the SQL standard. I am a bit hesitant on the last part, yet this is my\ncurrent understanding.\n\nPlease, let me know what you think.\n\n>\n> ------------------------------------------------------------------------------------------------------------------\n>\n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n\n",
"msg_date": "Wed, 10 Jun 2020 10:17:44 +0000",
"msg_from": "Georgios <gkokolatos@protonmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Relation wide 'LIKE' clause"
}
] |
[
{
"msg_contents": "walkdir is used indirectly in the abort handler of SharedFileSetOnDetach, which has the following comment:\n\n/*\n* Callback function that will be invoked when this backend detaches from a\n* DSM segment holding a SharedFileSet that it has created or attached to. If\n* we are the last to detach, then try to remove the directories and\n* everything in them. We can't raise an error on failures, because this runs\n* in error cleanup paths.\n*/\n\nwalkdir itself has elevel, which is set to LOG in that case, so it should not throw an ERROR.\n\nHowever, since walkdir calls AllocateDir this is not always true. AllocateDir throws an ERROR in two places:\n\n 1. https://github.com/postgres/postgres/blob/5a4ada71a8f944600c348a6e4f5feb388ba8bd37/src/backend/storage/file/fd.c#L2590-L2593\n 2. and indirectly in reserveAllocatedDesc https://github.com/postgres/postgres/blob/5a4ada71a8f944600c348a6e4f5feb388ba8bd37/src/backend/storage/file/fd.c#L2266-L2268\n\nThe fix seems simple enough: AllocateDir and reserveAllocatedDesc should take an elevel argument and honor that. To not change the signature of AllocateDir and possibly break extensions, it could simply become a wrapper of a new function like AllocateDirWithElevel(dirname, ERROR).",
"msg_date": "Wed, 10 Jun 2020 12:04:07 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "walkdir does not honor elevel because of call to AllocateDir,\n possibly causing issues in abort handler"
}
] |
[
{
"msg_contents": "Hello,\n\nI am trying to understand/optimize how a COPY operation behaves when\ntransfering a bytea from the database to a client.\n\nFor simplicity, I'll consider that I have only one bytea _image_ in the\n_images_ table.\n\nStarting with\nCOPY (SELECT image FROM images) TO STDOUT BINARY\n\nI understand that :\n - the server will create a linear buffer on the server side holding the\nwhole image and then start sending it over the network in one big copyData\nmessage chunked in 64KB network chunks\n - the client can manage to extract this copyData payload by re-assembling\nthose chunks in memory or by streaming the relevant data parts of the\nchunks elsewhere.\n\nso the problem I see in a streaming situation is that the server actually\nneeds to buffer the whole image in memory.\n\nNow the image is already compressed so if I\nALTER TABLE images ALTER image SET STORAGE EXTERNAL\nI can use the fact that substring on non compressed toasted values will\nfetch only the needed parts and do\n\nCOPY (\n SELECT (\n SELECT substring(image from n for 65536) from images)\n FROM generate_series(1, (select length(image) from images), 65536) n\n ) TO STDOUT BINARY\n\nAs I understand it, this would be less memory intensive on the server side\nif the server starts sending rows before all rows of the subselect are\nbuilt because it would only need to prepare a sequence of 65536 bytes long\nbuffers for the rows it would decide to have in memory.\n\nbut is there a way to know if such a COPY/SELECT statement will indeed\nstart sending rows before they are all prepared on the server ? Does it\ndepend on the request and is there a difference if I add an order by on\nthe select versus the natural order of the table ?\nHow many rows will be needed in memory before the sending begins ?\n\nI hope my explanation was clear. 
I am looking for help in better\nunderstanding how the server decides to stream the COPY data out of the\nserver vs the internal retrieval of the COPY'd subselect.\n\nThank you\nJérôme\n\nHello,I am trying to understand/optimize how a COPY operation behaves when transfering a bytea from the database to a client.For simplicity, I'll consider that I have only one bytea _image_ in the _images_ table.Starting withCOPY (SELECT image FROM images) TO STDOUT BINARYI understand that : - the server will create a linear buffer on the server side holding the whole image and then start sending it over the network in one big copyData message chunked in 64KB network chunks - the client can manage to extract this copyData payload by re-assembling those chunks in memory or by streaming the relevant data parts of the chunks elsewhere.so the problem I see in a streaming situation is that the server actually needs to buffer the whole image in memory.Now the image is already compressed so if IALTER TABLE images ALTER image SET STORAGE EXTERNALI can use the fact that substring on non compressed toasted values will fetch only the needed parts and doCOPY ( SELECT ( SELECT substring(image from n for 65536) from images) FROM generate_series(1, (select length(image) from images), \n\n\n\n65536) n ) TO STDOUT BINARYAs I understand it, this would be less memory intensive on the server side if the server starts sending rows before all rows of the subselect are built because it would only need to prepare a sequence of 65536 bytes long buffers for the rows it would decide to have in memory.but is there a way to know if such a COPY/SELECT statement will indeed start sending rows before they are all prepared on the server ? Does it depend on the request and is there a difference if I add an order by on the select versus the natural order of the table ?\n\nHow many rows will be needed in memory before the sending begins ?I hope my explanation was clear. 
I am looking for help in better understanding how the server decides to stream the COPY data out of the server vs the internal retrieval of the COPY'd subselect.Thank youJérôme",
"msg_date": "Wed, 10 Jun 2020 14:52:02 +0200",
"msg_from": "Jerome Wagner <jerome.wagner@laposte.net>",
"msg_from_op": true,
"msg_subject": "COPY, bytea streaming and memory footprint"
}
] |
[
{
"msg_contents": "Hi,\n\nI've just had some thoughts about the possible usefulness of having\nthe buildfarm record the run-time of each regression test to allow us\nto have some sort of ability to track the run-time history of each\ntest.\n\nI thought the usefulness might be two-fold:\n\n1. We could quickly identify when someone adds some overly complex\ntest and slows down the regression tests too much.\n2. We might get some faster insight into performance regressions.\n\nI can think of about 3 reasons that a test might slow down.\n\na) Someone adds some new tests within the test file.\nb) Actual performance regression in Postgres\nc) Animal busy with other work.\n\nWe likely could do a semi-decent job of telling a) and b) apart by\njust recording the latest commit that changed the .sql file for the\ntest. We could also likely see when c) is at play by the results\nreturning back to normal again a few runs after some spike. We'd only\nwant to pay attention to consistent slowdowns. Perhaps there would be\ntoo much variability with the parallel tests, but maybe we could just\nrecord it for the serial tests in make check-world.\n\nI only thought of this after reading [1]. If we went ahead with that,\nas of now, it feels like someone could quite easily break that\noptimisation and nobody would notice for a long time.\n\nI admit to not having looked at the buildfarm code to determine how\npractical such a change would be. I've assumed there is a central\ndatabase that stores all the results.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAJ3gD9eEXJ2CHMSiOehvpTZu3Ap2GMi5jaXhoZuW=3XJLmZWpw@mail.gmail.com\n\n\n",
"msg_date": "Thu, 11 Jun 2020 00:58:34 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Recording test runtimes with the buildfarm"
},
{
"msg_contents": "Greetings,\n\n* David Rowley (dgrowleyml@gmail.com) wrote:\n> 1. We could quickly identify when someone adds some overly complex\n> test and slows down the regression tests too much.\n\nSure, makes sense to me. We do track the individual 'stage_duration'\nbut we don't track things down to a per-regression-test basis. To do\nthat, I think we'd need the regression system to spit that kind of\ndetailed information out somewhere (in a structured format) that the\nbuildfarm client would then be able to pick it up and send to the server\nto write into an appropriate table.\n\n> 2. We might get some faster insight into performance regressions.\n\nThere's some effort going into continuing to build out a \"performance\"\nfarm, whose specific goal is to try and help exactly this issue. Trying\nto do that with the buildfarm has the challenge that many of those\nsystems aren't dedicated and therefore the timing could vary wildly\nbetween runs due to entirely independent things than our code.\n\nWould certainly be great to have more people working on that. Currently\nit's primarily Ilaria (GSoC student), Ads and I.\n\nCurrent repo is https://github.com/PGPerfFarm/pgperffarm if folks want\nto look at it, but we're in the process of making some pretty serious\nchanges, so now might not be the best time to look at the code. We're\ncoordinating on the 'Postgresteam' slack in #perffarm for anyone\ninterested tho.\n\n> I only thought of this after reading [1]. 
If we went ahead with that,\n> as of now, it feels like someone could quite easily break that\n> optimisation and nobody would notice for a long time.\n\nOne of the goals with the perffarm is to be able to support different\ntypes of benchmarks, beyond just pgbench, so that we'd be able to add a\nbenchmark for \"numeric\", perhaps, or maybe create a script with pgbench\nthat ends up being heavy on numerics or such.\n\n> I admit to not having looked at the buildfarm code to determine how\n> practical such a change would be. I've assumed there is a central\n> database that stores all the results.\n\nYes, there's a central database where results are pushed and that's what\nyou see when you go to buildfarm.postgresql.org, there's also an archive\nserver which has the logs going much farther back (and is quite a bit\nlarger, of course..).\n\nThanks,\n\nStephen",
"msg_date": "Wed, 10 Jun 2020 09:17:43 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Recording test runtimes with the buildfarm"
},
{
"msg_contents": "\nOn 6/10/20 8:58 AM, David Rowley wrote:\n> Hi,\n>\n> I've just had some thoughts about the possible usefulness of having\n> the buildfarm record the run-time of each regression test to allow us\n> to have some sort of ability to track the run-time history of each\n> test.\n>\n> I thought the usefulness might be two-fold:\n>\n> 1. We could quickly identify when someone adds some overly complex\n> test and slows down the regression tests too much.\n> 2. We might get some faster insight into performance regressions.\n>\n> I can think of about 3 reasons that a test might slow down.\n>\n> a) Someone adds some new tests within the test file.\n> b) Actual performance regression in Postgres\n> c) Animal busy with other work.\n>\n> We likely could do a semi-decent job of telling a) and b) apart by\n> just recording the latest commit that changed the .sql file for the\n> test. We could also likely see when c) is at play by the results\n> returning back to normal again a few runs after some spike. We'd only\n> want to pay attention to consistent slowdowns. Perhaps there would be\n> too much variability with the parallel tests, but maybe we could just\n> record it for the serial tests in make check-world.\n>\n> I only thought of this after reading [1]. If we went ahead with that,\n> as of now, it feels like someone could quite easily break that\n> optimisation and nobody would notice for a long time.\n>\n> I admit to not having looked at the buildfarm code to determine how\n> practical such a change would be. I've assumed there is a central\n> database that stores all the results.\n>\n\nDavid,\n\n\nYes we have a central database. But we don't have anything that digs\ninto each step. 
In fact, on the server side the code isn't even really\naware that it's Postgres that's being tested.\n\n\nThe basic structures are:\n\n\n buildsystems - one row per animal\n\n build_status - one row per run\n\n build_status_log - one row per step within each run\n\n\nThe last table has a blob containing the output of the step (e.g. \"make\ncheck\") plus any relevant files (like the postmaster.log).\n\nBut we don't have any structure corresponding to an individual\nregression test.\n\nArchitecturally, we could add a table for named times, and have the\nclient extract and report them.\n\nAlternatively, people with access to the database could extract the logs\nand post-process them using perl or python. That would involve no work\non my part :-) But it would not be automated.\n\nWhat we do record (in build_status_log) is the time each step took. So\nany regression test that suddenly blew out should likewise cause a\nblowout in the time the whole \"make check\" took. It might be possible to\ncreate a page that puts stats like that up. I think that's probably a\nbetter way to go.\n\nTying results to an individual commit is harder. There could be dozens\nof commits that happened between the current run and the previous run on\na given animal. But usually what has caused any event is fairly clear\nwhen you look at it.\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 10 Jun 2020 09:57:29 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Recording test runtimes with the buildfarm"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> Alternatively, people with access to the database could extract the logs\n> and post-process them using perl or python. That would involve no work\n> on my part :-) But it would not be automated.\n\nYeah, we could easily extract per-test-script runtimes, since pg_regress\nstarted to print those. But ...\n\n> What we do record (in build_status_log) is the time each step took. So\n> any regression test that suddenly blew out should likewise cause a\n> blowout in the time the whole \"make check\" took.\n\nI have in the past scraped the latter results and tried to make sense of\nthem. They are *mighty* noisy, even when considering just one animal\nthat I know to be running on a machine with little else to do. Maybe\naveraging across the whole buildfarm could reduce the noise level, but\nI'm not very hopeful. Per-test-script times would likely be even\nnoisier (ISTM anyway, maybe I'm wrong).\n\nThe entire reason we've been discussing a separate performance farm\nis the expectation that buildfarm timings will be too noisy to be\nuseful to detect any but the most obvious performance effects.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 10 Jun 2020 10:13:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Recording test runtimes with the buildfarm"
},
{
"msg_contents": "\nOn 6/10/20 10:13 AM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> Alternatively, people with access to the database could extract the logs\n>> and post-process them using perl or python. That would involve no work\n>> on my part :-) But it would not be automated.\n> Yeah, we could easily extract per-test-script runtimes, since pg_regress\n> started to print those. But ...\n>\n>> What we do record (in build_status_log) is the time each step took. So\n>> any regression test that suddenly blew out should likewise cause a\n>> blowout in the time the whole \"make check\" took.\n> I have in the past scraped the latter results and tried to make sense of\n> them. They are *mighty* noisy, even when considering just one animal\n> that I know to be running on a machine with little else to do. Maybe\n> averaging across the whole buildfarm could reduce the noise level, but\n> I'm not very hopeful. Per-test-script times would likely be even\n> noisier (ISTM anyway, maybe I'm wrong).\n>\n> The entire reason we've been discussing a separate performance farm\n> is the expectation that buildfarm timings will be too noisy to be\n> useful to detect any but the most obvious performance effects.\n>\n> \t\t\t\n\n\n\nYes, but will the performance farm be testing regression timings?\n\n\nMaybe we're going to need several test suites, one of which could be\nregression tests. But the regression tests themselves are not really\nintended for performance testing.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 10 Jun 2020 12:18:55 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Recording test runtimes with the buildfarm"
},
{
"msg_contents": "Greetings,\n\n* Andrew Dunstan (andrew.dunstan@2ndquadrant.com) wrote:\n> On 6/10/20 10:13 AM, Tom Lane wrote:\n> > Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> >> Alternatively, people with access to the database could extract the logs\n> >> and post-process them using perl or python. That would involve no work\n> >> on my part :-) But it would not be automated.\n> > Yeah, we could easily extract per-test-script runtimes, since pg_regress\n> > started to print those. But ...\n> >\n> >> What we do record (in build_status_log) is the time each step took. So\n> >> any regression test that suddenly blew out should likewise cause a\n> >> blowout in the time the whole \"make check\" took.\n> > I have in the past scraped the latter results and tried to make sense of\n> > them. They are *mighty* noisy, even when considering just one animal\n> > that I know to be running on a machine with little else to do. Maybe\n> > averaging across the whole buildfarm could reduce the noise level, but\n> > I'm not very hopeful. Per-test-script times would likely be even\n> > noisier (ISTM anyway, maybe I'm wrong).\n> >\n> > The entire reason we've been discussing a separate performance farm\n> > is the expectation that buildfarm timings will be too noisy to be\n> > useful to detect any but the most obvious performance effects.\n> \n> Yes, but will the performance farm be testing regression timings?\n\nWe are not currently envisioning that, no.\n\n> Maybe we're going to need several test suites, one of which could be\n> regression tests. But the regression tests themselves are not really\n> intended for performance testing.\n\nAgree with this- better would be tests which are specifically written to\ntest performance instead.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 10 Jun 2020 12:21:40 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Recording test runtimes with the buildfarm"
},
{
"msg_contents": "On Thu, 11 Jun 2020 at 02:13, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I have in the past scraped the latter results and tried to make sense of\n> them. They are *mighty* noisy, even when considering just one animal\n> that I know to be running on a machine with little else to do.\n\nDo you recall if you looked at the parallel results or the serially\nexecuted ones?\n\nI imagine that the parallel ones will have much more noise since we\nrun the tests up to 20 at a time. I imagine probably none, or at most\nnot many of the animals have enough CPU cores not to be context\nswitching a lot during those the parallel runs. I thought the serial\nones would be better but didn't have an idea of they'd be good enough\nto be useful.\n\nDavid\n\n\n",
"msg_date": "Thu, 11 Jun 2020 09:13:51 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Recording test runtimes with the buildfarm"
},
{
"msg_contents": "On Thu, Jun 11, 2020 at 2:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I have in the past scraped the latter results and tried to make sense of\n> them. They are *mighty* noisy, even when considering just one animal\n> that I know to be running on a machine with little else to do. Maybe\n> averaging across the whole buildfarm could reduce the noise level, but\n> I'm not very hopeful. Per-test-script times would likely be even\n> noisier (ISTM anyway, maybe I'm wrong).\n\nI've been doing that in a little database that pulls down the results\nand analyses them with primitive regexes. First I wanted to know the\npass/fail history for each individual regression, isolation and TAP\nscript, then I wanted to build something that could identify tests\nthat are 'flapping', and work out when the started and stopped\nflapping etc. I soon realised it was all too noisy, but then I\nfigured that I could fix that by detecting crashes. So I classify\nevery top level build farm run as SUCCESS, FAILURE or CRASH. If the\ntop level run was CRASH, than I can disregard the individual per\nscript results, because they're all BS.\n\n\n",
"msg_date": "Thu, 11 Jun 2020 09:43:43 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Recording test runtimes with the buildfarm"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Thu, 11 Jun 2020 at 02:13, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I have in the past scraped the latter results and tried to make sense of\n>> them. They are *mighty* noisy, even when considering just one animal\n>> that I know to be running on a machine with little else to do.\n\n> Do you recall if you looked at the parallel results or the serially\n> executed ones?\n\n> I imagine that the parallel ones will have much more noise since we\n> run the tests up to 20 at a time. I imagine probably none, or at most\n> not many of the animals have enough CPU cores not to be context\n> switching a lot during those the parallel runs. I thought the serial\n> ones would be better but didn't have an idea of they'd be good enough\n> to be useful.\n\nI can't claim to recall specifically, but I agree with your theory\nabout that, so I probably looked at the serial-schedule case.\n\nNote that this is moot for animals using use_installcheck_parallel\n... but it looks like that's still just a minority of them.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 10 Jun 2020 17:56:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Recording test runtimes with the buildfarm"
},
{
"msg_contents": "On Thu, Jun 11, 2020 at 9:43 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Jun 11, 2020 at 2:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I have in the past scraped the latter results and tried to make sense of\n> > them. They are *mighty* noisy, even when considering just one animal\n> > that I know to be running on a machine with little else to do. Maybe\n> > averaging across the whole buildfarm could reduce the noise level, but\n> > I'm not very hopeful. Per-test-script times would likely be even\n> > noisier (ISTM anyway, maybe I'm wrong).\n>\n> I've been doing that in a little database that pulls down the results\n> and analyses them with primitive regexes. First I wanted to know the\n> pass/fail history for each individual regression, isolation and TAP\n> script, then I wanted to build something that could identify tests\n> that are 'flapping', and work out when the started and stopped\n> flapping etc. I soon realised it was all too noisy, but then I\n> figured that I could fix that by detecting crashes. So I classify\n> every top level build farm run as SUCCESS, FAILURE or CRASH. If the\n> top level run was CRASH, than I can disregard the individual per\n> script results, because they're all BS.\n\nWith more coffee I realise that you were talking about noise times,\nnot noisy pass/fail results. But I still want to throw that idea out\nthere, if we're considering analysing the logs.\n\n\n",
"msg_date": "Thu, 11 Jun 2020 10:00:15 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Recording test runtimes with the buildfarm"
},
{
"msg_contents": "Hi, \n\nOn June 10, 2020 2:13:51 PM PDT, David Rowley <dgrowleyml@gmail.com> wrote:\n>On Thu, 11 Jun 2020 at 02:13, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I have in the past scraped the latter results and tried to make sense\n>of\n>> them. They are *mighty* noisy, even when considering just one animal\n>> that I know to be running on a machine with little else to do.\n>\n>Do you recall if you looked at the parallel results or the serially\n>executed ones?\n>\n>I imagine that the parallel ones will have much more noise since we\n>run the tests up to 20 at a time. I imagine probably none, or at most\n>not many of the animals have enough CPU cores not to be context\n>switching a lot during those the parallel runs. I thought the serial\n>ones would be better but didn't have an idea of they'd be good enough\n>to be useful.\n\nI'd assume that a rolling average (maybe 10 runs or so) would hide noise enough to see at least some trends even for parallel runs.\n\nWe should be able to prototype this with a few queries over the bf database, right?\n\nAndres\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Wed, 10 Jun 2020 15:00:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Recording test runtimes with the buildfarm"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I've been doing that in a little database that pulls down the results\n> and analyses them with primitive regexes. First I wanted to know the\n> pass/fail history for each individual regression, isolation and TAP\n> script, then I wanted to build something that could identify tests\n> that are 'flapping', and work out when the started and stopped\n> flapping etc. I soon realised it was all too noisy, but then I\n> figured that I could fix that by detecting crashes. So I classify\n> every top level build farm run as SUCCESS, FAILURE or CRASH. If the\n> top level run was CRASH, than I can disregard the individual per\n> script results, because they're all BS.\n\nIf you can pin the crash on a particular test script, it'd be useful\nto track that as a kind of failure. In general, though, both crashes\nand non-crash failures tend to cause collateral damage to later test\nscripts --- if you can't filter that out then the later scripts will\nhave high false-positive rates.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 10 Jun 2020 18:02:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Recording test runtimes with the buildfarm"
},
{
"msg_contents": "On Thu, 11 Jun 2020 at 10:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > I've been doing that in a little database that pulls down the results\n> > and analyses them with primitive regexes. First I wanted to know the\n> > pass/fail history for each individual regression, isolation and TAP\n> > script, then I wanted to build something that could identify tests\n> > that are 'flapping', and work out when the started and stopped\n> > flapping etc. I soon realised it was all too noisy, but then I\n> > figured that I could fix that by detecting crashes. So I classify\n> > every top level build farm run as SUCCESS, FAILURE or CRASH. If the\n> > top level run was CRASH, than I can disregard the individual per\n> > script results, because they're all BS.\n>\n> If you can pin the crash on a particular test script, it'd be useful\n> to track that as a kind of failure. In general, though, both crashes\n> and non-crash failures tend to cause collateral damage to later test\n> scripts --- if you can't filter that out then the later scripts will\n> have high false-positive rates.\n\nI guess the fact that you've both needed to do analysis on individual\ntests shows that there might be a call for this beyond just recording\nthe test's runtime.\n\nIf we had a table that stored the individual test details, pass/fail\nand just stored the timing information along with that, then, even if\nthe timing was unstable, it could still be useful for some analysis.\nI'd be happy enough even if that was only available as a csv file\ndownload. I imagine the buildfarm does not need to provide us with\nany tools for doing analysis on this. Ideally, there would be some\nrun_id that we could link it back to the test run which would give us\nthe commit SHA, and the animal that it ran on. 
Joining to details\nabout the animal could be useful too, e.g perhaps a certain test\nalways fails on 32-bit machines.\n\nI suppose that maybe we could modify pg_regress to add a command line\noption to have it write out a machine-readable file, e.g:\ntestname,result,runtime\\n, then just have the buildfarm client ship\nthat off to the buildfarm server to record in the database.\n\nDavid\n\n\n",
"msg_date": "Thu, 11 Jun 2020 15:29:39 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Recording test runtimes with the buildfarm"
},
{
"msg_contents": "On 6/10/20 6:00 PM, Andres Freund wrote:\n> On June 10, 2020 2:13:51 PM PDT, David Rowley <dgrowleyml@gmail.com> wrote:\n>>On Thu, 11 Jun 2020 at 02:13, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> I have in the past scraped the latter results and tried to make sense\n>>of\n>>> them. They are *mighty* noisy, even when considering just one animal\n>>> that I know to be running on a machine with little else to do.\n>>\n>>Do you recall if you looked at the parallel results or the serially\n>>executed ones?\n>>\n>>I imagine that the parallel ones will have much more noise since we\n>>run the tests up to 20 at a time. I imagine probably none, or at most\n>>not many of the animals have enough CPU cores not to be context\n>>switching a lot during those the parallel runs. I thought the serial\n>>ones would be better but didn't have an idea of they'd be good enough\n>>to be useful.\n> \n> I'd assume that a rolling average (maybe 10 runs or so) would hide noise enough to see at least some trends even for parallel runs.\n> \n> We should be able to prototype this with a few queries over the bf database, right?\n\n\nThis seems to me like a perfect use case for control charts:\n\n https://en.wikipedia.org/wiki/Control_chart\n\nThey are designed specifically to detect systematic changes in an environment\nwith random noise.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n",
"msg_date": "Thu, 11 Jun 2020 07:33:10 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Recording test runtimes with the buildfarm"
},
{
"msg_contents": "Greetings,\n\n* David Rowley (dgrowleyml@gmail.com) wrote:\n> On Thu, 11 Jun 2020 at 10:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Thomas Munro <thomas.munro@gmail.com> writes:\n> > > I've been doing that in a little database that pulls down the results\n> > > and analyses them with primitive regexes. First I wanted to know the\n> > > pass/fail history for each individual regression, isolation and TAP\n> > > script, then I wanted to build something that could identify tests\n> > > that are 'flapping', and work out when the started and stopped\n> > > flapping etc. I soon realised it was all too noisy, but then I\n> > > figured that I could fix that by detecting crashes. So I classify\n> > > every top level build farm run as SUCCESS, FAILURE or CRASH. If the\n> > > top level run was CRASH, than I can disregard the individual per\n> > > script results, because they're all BS.\n> >\n> > If you can pin the crash on a particular test script, it'd be useful\n> > to track that as a kind of failure. In general, though, both crashes\n> > and non-crash failures tend to cause collateral damage to later test\n> > scripts --- if you can't filter that out then the later scripts will\n> > have high false-positive rates.\n> \n> I guess the fact that you've both needed to do analysis on individual\n> tests shows that there might be a call for this beyond just recording\n> the test's runtime.\n> \n> If we had a table that stored the individual test details, pass/fail\n> and just stored the timing information along with that, then, even if\n> the timing was unstable, it could still be useful for some analysis.\n> I'd be happy enough even if that was only available as a csv file\n> download. I imagine the buildfarm does not need to provide us with\n> any tools for doing analysis on this. Ideally, there would be some\n> run_id that we could link it back to the test run which would give us\n> the commit SHA, and the animal that it ran on. 
Joining to details\n> about the animal could be useful too, e.g perhaps a certain test\n> always fails on 32-bit machines.\n> \n> I suppose that maybe we could modify pg_regress to add a command line\n> option to have it write out a machine-readable file, e.g:\n> testname,result,runtime\\n, then just have the buildfarm client ship\n> that off to the buildfarm server to record in the database.\n\nThat seems like it'd be the best approach to me, though I'd defer to\nAndrew on it.\n\nBy the way, if you'd like access to the buildfarm archive server where\nall this stuff is stored, that can certainly be arranged, just let me\nknow.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 11 Jun 2020 10:21:04 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Recording test runtimes with the buildfarm"
},
{
"msg_contents": "\nOn 6/11/20 10:21 AM, Stephen Frost wrote:\n> Greetings,\n>\n> * David Rowley (dgrowleyml@gmail.com) wrote:\n>> On Thu, 11 Jun 2020 at 10:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Thomas Munro <thomas.munro@gmail.com> writes:\n>>>> I've been doing that in a little database that pulls down the results\n>>>> and analyses them with primitive regexes. First I wanted to know the\n>>>> pass/fail history for each individual regression, isolation and TAP\n>>>> script, then I wanted to build something that could identify tests\n>>>> that are 'flapping', and work out when the started and stopped\n>>>> flapping etc. I soon realised it was all too noisy, but then I\n>>>> figured that I could fix that by detecting crashes. So I classify\n>>>> every top level build farm run as SUCCESS, FAILURE or CRASH. If the\n>>>> top level run was CRASH, than I can disregard the individual per\n>>>> script results, because they're all BS.\n>>> If you can pin the crash on a particular test script, it'd be useful\n>>> to track that as a kind of failure. In general, though, both crashes\n>>> and non-crash failures tend to cause collateral damage to later test\n>>> scripts --- if you can't filter that out then the later scripts will\n>>> have high false-positive rates.\n>> I guess the fact that you've both needed to do analysis on individual\n>> tests shows that there might be a call for this beyond just recording\n>> the test's runtime.\n>>\n>> If we had a table that stored the individual test details, pass/fail\n>> and just stored the timing information along with that, then, even if\n>> the timing was unstable, it could still be useful for some analysis.\n>> I'd be happy enough even if that was only available as a csv file\n>> download. I imagine the buildfarm does not need to provide us with\n>> any tools for doing analysis on this. Ideally, there would be some\n>> run_id that we could link it back to the test run which would give us\n>> the commit SHA, and the animal that it ran on. Joining to details\n>> about the animal could be useful too, e.g perhaps a certain test\n>> always fails on 32-bit machines.\n>>\n>> I suppose that maybe we could modify pg_regress to add a command line\n>> option to have it write out a machine-readable file, e.g:\n>> testname,result,runtime\\n, then just have the buildfarm client ship\n>> that off to the buildfarm server to record in the database.\n> That seems like it'd be the best approach to me, though I'd defer to\n> Andrew on it.\n>\n> By the way, if you'd like access to the buildfarm archive server where\n> all this stuff is stored, that can certainly be arranged, just let me\n> know.\n>\n\n\nYeah, we'll need to work out where to stash the file. The client will\npick up anything in src/regress/log for \"make check\", but would need\nadjusting for other steps that invoke pg_regress. I'm getting close to\ncutting a new client release, but I can delay it till we settle this.\n\n\nOn the server side, we could add a table with a key of <animal,\nsnapshot, branch, step, testname> but we'd need to make sure those test\nnames were unique. Maybe we need a way of telling pg_regress to prepend\na module name (e.g. btree_gist ot plperl) to the test name.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 11 Jun 2020 10:55:49 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Recording test runtimes with the buildfarm"
},
{
"msg_contents": "On Thu, Jun 11, 2020 at 4:56 PM Andrew Dunstan <\nandrew.dunstan@2ndquadrant.com> wrote:\n\n>\n> On 6/11/20 10:21 AM, Stephen Frost wrote:\n> > Greetings,\n> >\n> > * David Rowley (dgrowleyml@gmail.com) wrote:\n> >> On Thu, 11 Jun 2020 at 10:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>> Thomas Munro <thomas.munro@gmail.com> writes:\n> >>>> I've been doing that in a little database that pulls down the results\n> >>>> and analyses them with primitive regexes. First I wanted to know the\n> >>>> pass/fail history for each individual regression, isolation and TAP\n> >>>> script, then I wanted to build something that could identify tests\n> >>>> that are 'flapping', and work out when the started and stopped\n> >>>> flapping etc. I soon realised it was all too noisy, but then I\n> >>>> figured that I could fix that by detecting crashes. So I classify\n> >>>> every top level build farm run as SUCCESS, FAILURE or CRASH. If the\n> >>>> top level run was CRASH, than I can disregard the individual per\n> >>>> script results, because they're all BS.\n> >>> If you can pin the crash on a particular test script, it'd be useful\n> >>> to track that as a kind of failure. In general, though, both crashes\n> >>> and non-crash failures tend to cause collateral damage to later test\n> >>> scripts --- if you can't filter that out then the later scripts will\n> >>> have high false-positive rates.\n> >> I guess the fact that you've both needed to do analysis on individual\n> >> tests shows that there might be a call for this beyond just recording\n> >> the test's runtime.\n> >>\n> >> If we had a table that stored the individual test details, pass/fail\n> >> and just stored the timing information along with that, then, even if\n> >> the timing was unstable, it could still be useful for some analysis.\n> >> I'd be happy enough even if that was only available as a csv file\n> >> download. I imagine the buildfarm does not need to provide us with\n> >> any tools for doing analysis on this. Ideally, there would be some\n> >> run_id that we could link it back to the test run which would give us\n> >> the commit SHA, and the animal that it ran on. Joining to details\n> >> about the animal could be useful too, e.g perhaps a certain test\n> >> always fails on 32-bit machines.\n> >>\n> >> I suppose that maybe we could modify pg_regress to add a command line\n> >> option to have it write out a machine-readable file, e.g:\n> >> testname,result,runtime\\n, then just have the buildfarm client ship\n> >> that off to the buildfarm server to record in the database.\n> > That seems like it'd be the best approach to me, though I'd defer to\n> > Andrew on it.\n> >\n> > By the way, if you'd like access to the buildfarm archive server where\n> > all this stuff is stored, that can certainly be arranged, just let me\n> > know.\n> >\n>\n>\n> Yeah, we'll need to work out where to stash the file. The client will\n> pick up anything in src/regress/log for \"make check\", but would need\n> adjusting for other steps that invoke pg_regress. I'm getting close to\n> cutting a new client release, but I can delay it till we settle this.\n>\n>\n> On the server side, we could add a table with a key of <animal,\n> snapshot, branch, step, testname> but we'd need to make sure those test\n> names were unique. Maybe we need a way of telling pg_regress to prepend\n> a module name (e.g. btree_gist ot plperl) to the test name.\n>\n\nIt seems pretty trivial to for example get all the steps out of check.log\nand their timing with a regexp. I just used '^(?:test)?\\s+(\\S+)\\s+\\.\\.\\.\nok\\s+(\\d+) ms$' as the regexp. Running that against a few hundred build\nruns in the db generally looks fine, though I didn't look into it in\ndetail.\n\nOf course, that only looked at check.log, and more logic would be needed if\nwe want to look into the other areas as well, but as long as it's\npg_regress output I think it should be easy?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Thu, Jun 11, 2020 at 4:56 PM Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:\nOn 6/11/20 10:21 AM, Stephen Frost wrote:\n> Greetings,\n>\n> * David Rowley (dgrowleyml@gmail.com) wrote:\n>> On Thu, 11 Jun 2020 at 10:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Thomas Munro <thomas.munro@gmail.com> writes:\n>>>> I've been doing that in a little database that pulls down the results\n>>>> and analyses them with primitive regexes. First I wanted to know the\n>>>> pass/fail history for each individual regression, isolation and TAP\n>>>> script, then I wanted to build something that could identify tests\n>>>> that are 'flapping', and work out when the started and stopped\n>>>> flapping etc. I soon realised it was all too noisy, but then I\n>>>> figured that I could fix that by detecting crashes. So I classify\n>>>> every top level build farm run as SUCCESS, FAILURE or CRASH. If the\n>>>> top level run was CRASH, than I can disregard the individual per\n>>>> script results, because they're all BS.\n>>> If you can pin the crash on a particular test script, it'd be useful\n>>> to track that as a kind of failure. In general, though, both crashes\n>>> and non-crash failures tend to cause collateral damage to later test\n>>> scripts --- if you can't filter that out then the later scripts will\n>>> have high false-positive rates.\n>> I guess the fact that you've both needed to do analysis on individual\n>> tests shows that there might be a call for this beyond just recording\n>> the test's runtime.\n>>\n>> If we had a table that stored the individual test details, pass/fail\n>> and just stored the timing information along with that, then, even if\n>> the timing was unstable, it could still be useful for some analysis.\n>> I'd be happy enough even if that was only available as a csv file\n>> download. I imagine the buildfarm does not need to provide us with\n>> any tools for doing analysis on this. Ideally, there would be some\n>> run_id that we could link it back to the test run which would give us\n>> the commit SHA, and the animal that it ran on. Joining to details\n>> about the animal could be useful too, e.g perhaps a certain test\n>> always fails on 32-bit machines.\n>>\n>> I suppose that maybe we could modify pg_regress to add a command line\n>> option to have it write out a machine-readable file, e.g:\n>> testname,result,runtime\\n, then just have the buildfarm client ship\n>> that off to the buildfarm server to record in the database.\n> That seems like it'd be the best approach to me, though I'd defer to\n> Andrew on it.\n>\n> By the way, if you'd like access to the buildfarm archive server where\n> all this stuff is stored, that can certainly be arranged, just let me\n> know.\n>\n\n\nYeah, we'll need to work out where to stash the file. The client will\npick up anything in src/regress/log for \"make check\", but would need\nadjusting for other steps that invoke pg_regress. I'm getting close to\ncutting a new client release, but I can delay it till we settle this.\n\n\nOn the server side, we could add a table with a key of <animal,\nsnapshot, branch, step, testname> but we'd need to make sure those test\nnames were unique. Maybe we need a way of telling pg_regress to prepend\na module name (e.g. btree_gist ot plperl) to the test name.It seems pretty trivial to for example get all the steps out of check.log and their timing with a regexp. I just used '^(?:test)?\\s+(\\S+)\\s+\\.\\.\\. ok\\s+(\\d+) ms$' as the regexp. Running that against a few hundred build runs in the db generally looks fine, though I didn't look into it in detail. Of course, that only looked at check.log, and more logic would be needed if we want to look into the other areas as well, but as long as it's pg_regress output I think it should be easy?-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Thu, 11 Jun 2020 17:27:57 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Recording test runtimes with the buildfarm"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> On Thu, Jun 11, 2020 at 4:56 PM Andrew Dunstan <\n> andrew.dunstan@2ndquadrant.com> wrote:\n>> Yeah, we'll need to work out where to stash the file. The client will\n>> pick up anything in src/regress/log for \"make check\", but would need\n>> adjusting for other steps that invoke pg_regress. I'm getting close to\n>> cutting a new client release, but I can delay it till we settle this.\n\n> It seems pretty trivial to for example get all the steps out of check.log\n> and their timing with a regexp.\n\nYeah, I don't see why we can't scrape this data from the existing\nbuildfarm output, at least for the core regression tests. New\ninfrastructure could make it easier/cheaper, but I don't think we\nshould invest in that until we've got a provably useful tool.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 11 Jun 2020 12:32:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Recording test runtimes with the buildfarm"
},
{
"msg_contents": "On Thu, Jun 11, 2020 at 6:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Magnus Hagander <magnus@hagander.net> writes:\n> > On Thu, Jun 11, 2020 at 4:56 PM Andrew Dunstan <\n> > andrew.dunstan@2ndquadrant.com> wrote:\n> >> Yeah, we'll need to work out where to stash the file. The client will\n> >> pick up anything in src/regress/log for \"make check\", but would need\n> >> adjusting for other steps that invoke pg_regress. I'm getting close to\n> >> cutting a new client release, but I can delay it till we settle this.\n>\n> > It seems pretty trivial to for example get all the steps out of check.log\n> > and their timing with a regexp.\n>\n> Yeah, I don't see why we can't scrape this data from the existing\n> buildfarm output, at least for the core regression tests. New\n> infrastructure could make it easier/cheaper, but I don't think we\n> should invest in that until we've got a provably useful tool.\n>\n\nSo spending a few minutes to look at my data, it is not quite that easy if\nwe want to look at contrib checks for example. Both btree_gin and\nbtree_gist have checks called \"int4\" for example, and the aforementioned\nregexp won't pick that up. But that's surely a fixable problem.\n\nAnd perhaps we should at least start off data for just \"make check\" to see\nif it's useful, before spending too much work?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Thu, Jun 11, 2020 at 6:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Magnus Hagander <magnus@hagander.net> writes:\n> On Thu, Jun 11, 2020 at 4:56 PM Andrew Dunstan <\n> andrew.dunstan@2ndquadrant.com> wrote:\n>> Yeah, we'll need to work out where to stash the file. The client will\n>> pick up anything in src/regress/log for \"make check\", but would need\n>> adjusting for other steps that invoke pg_regress. I'm getting close to\n>> cutting a new client release, but I can delay it till we settle this.\n\n> It seems pretty trivial to for example get all the steps out of check.log\n> and their timing with a regexp.\n\nYeah, I don't see why we can't scrape this data from the existing\nbuildfarm output, at least for the core regression tests. New\ninfrastructure could make it easier/cheaper, but I don't think we\nshould invest in that until we've got a provably useful tool.So spending a few minutes to look at my data, it is not quite that easy if we want to look at contrib checks for example. Both btree_gin and btree_gist have checks called \"int4\" for example, and the aforementioned regexp won't pick that up. But that's surely a fixable problem.And perhaps we should at least start off data for just \"make check\" to see if it's useful, before spending too much work? -- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Thu, 11 Jun 2020 18:48:46 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Recording test runtimes with the buildfarm"
},
{
"msg_contents": "\nOn 6/11/20 12:32 PM, Tom Lane wrote:\n> Magnus Hagander <magnus@hagander.net> writes:\n>> On Thu, Jun 11, 2020 at 4:56 PM Andrew Dunstan <\n>> andrew.dunstan@2ndquadrant.com> wrote:\n>>> Yeah, we'll need to work out where to stash the file. The client will\n>>> pick up anything in src/regress/log for \"make check\", but would need\n>>> adjusting for other steps that invoke pg_regress. I'm getting close to\n>>> cutting a new client release, but I can delay it till we settle this.\n>> It seems pretty trivial to for example get all the steps out of check.log\n>> and their timing with a regexp.\n> Yeah, I don't see why we can't scrape this data from the existing\n> buildfarm output, at least for the core regression tests. New\n> infrastructure could make it easier/cheaper, but I don't think we\n> should invest in that until we've got a provably useful tool.\n\n\nOK, that makes my life fairly easy. Are you wanting this to be\nautomated, or just left as an exercise for researchers for now?\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 11 Jun 2020 17:26:49 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Recording test runtimes with the buildfarm"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 6/11/20 12:32 PM, Tom Lane wrote:\n>> Yeah, I don't see why we can't scrape this data from the existing\n>> buildfarm output, at least for the core regression tests. New\n>> infrastructure could make it easier/cheaper, but I don't think we\n>> should invest in that until we've got a provably useful tool.\n\n> OK, that makes my life fairly easy. Are you wanting this to be\n> automated, or just left as an exercise for researchers for now?\n\nI'm envisioning the latter --- I think somebody should prove that\nuseful results can be obtained before we spend any effort on making it\neasier to gather the input numbers. Mind you, I'm not saying that\nI don't believe it's possible to get good results. But building\ninfrastructure in advance of a solid use-case is a recipe for\nbuilding the wrong thing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 11 Jun 2020 17:44:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Recording test runtimes with the buildfarm"
},
{
"msg_contents": "\nHello Tom,\n\n> I have in the past scraped the latter results and tried to make sense of \n> them. They are *mighty* noisy, even when considering just one animal \n> that I know to be running on a machine with little else to do. Maybe \n> averaging across the whole buildfarm could reduce the noise level, but \n> I'm not very hopeful.\n\nI'd try with median instead of average, so that bad cases due to animal \noverloading are ignored.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sun, 14 Jun 2020 09:21:51 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Recording test runtimes with the buildfarm"
},
{
"msg_contents": "On Fri, 12 Jun 2020 at 04:32, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Magnus Hagander <magnus@hagander.net> writes:\n> > On Thu, Jun 11, 2020 at 4:56 PM Andrew Dunstan <\n> > It seems pretty trivial to for example get all the steps out of check.log\n> > and their timing with a regexp.\n>\n> Yeah, I don't see why we can't scrape this data from the existing\n> buildfarm output, at least for the core regression tests. New\n> infrastructure could make it easier/cheaper, but I don't think we\n> should invest in that until we've got a provably useful tool.\n\nLooking at the data mentioned in the logs for install-check-C, it\nseems some animals are quite stable with their timings but others are\nquite unstable.\n\nTaking the lowest half of the timings of each animal,test combination,\nwhere the animal ran the test at least 50 times, the top 3 animals\nwith the most consistent timing are.\n\npostgres=# select animal, avg(stddev) from (select animal,testname,\nstddev(ms) from (select\nanimal,testname,unnest(ar[:array_length(ar,1)/2]) ms from (select\nanimal,testname,array_agg(ms order by ms) ar from run_details group by\nanimal,testname) a)b group by 1,2 having count(*) > 50) c group by 1\norder by avg(stddev) limit 3;\n animal | avg\n------------+-------------------\n mule | 4.750935819647279\n massasauga | 5.410419286413067\n eider | 6.06834118301505\n(3 rows)\n\nAnd the least consistent 3 are:\n\npostgres=# select animal, avg(stddev) from (select animal,testname,\nstddev(ms) from (select\nanimal,testname,unnest(ar[:array_length(ar,1)/2]) ms from (select\nanimal,testname,array_agg(ms order by ms) ar from run_details group by\nanimal,testname) a)b group by 1,2 having count(*) > 50) c group by 1\norder by avg(stddev) desc limit 3;\n animal | avg\n----------+-------------------\n lorikeet | 830.9292062818336\n gharial | 725.9874198764217\n dory | 683.5182140482121\n(3 rows)\n\nIf I look at a random test from mule:\n\npostgres=# select commit,time,result,ms from run_details where animal\n= 'mule' and testname = 'join' order by time desc limit 10;\n commit | time | result | ms\n---------+---------------------+--------+-----\n e532b1d | 2020-06-15 19:30:03 | ok | 563\n 7a3543c | 2020-06-15 15:30:03 | ok | 584\n 3baa7e3 | 2020-06-15 11:30:03 | ok | 596\n 47d4d0c | 2020-06-15 07:30:03 | ok | 533\n decbe2b | 2020-06-14 15:30:04 | ok | 535\n 378badc | 2020-06-14 07:30:04 | ok | 563\n 23cbeda | 2020-06-13 19:30:04 | ok | 557\n 8f5b596 | 2020-06-13 07:30:04 | ok | 553\n 6472572 | 2020-06-13 03:30:04 | ok | 580\n 9a7fccd | 2020-06-12 23:30:04 | ok | 561\n(10 rows)\n\nand from lorikeet:\n\npostgres=# select commit,time,result,ms from run_details where animal\n= 'lorikeet' and testname = 'join' order by time desc limit 10;\n commit | time | result | ms\n---------+---------------------+--------+------\n 47d4d0c | 2020-06-15 08:28:35 | ok | 8890\n decbe2b | 2020-06-14 20:28:33 | ok | 8878\n 378badc | 2020-06-14 08:28:35 | ok | 8854\n cc07264 | 2020-06-14 05:22:59 | ok | 8883\n 8f5b596 | 2020-06-13 10:36:14 | ok | 8942\n 2f48ede | 2020-06-12 20:28:41 | ok | 8904\n ffd2582 | 2020-06-12 08:29:52 | ok | 2016\n 7aa4fb5 | 2020-06-11 23:21:26 | ok | 1939\n ad9291f | 2020-06-11 09:56:48 | ok | 1924\n c2bd1fe | 2020-06-10 23:01:42 | ok | 1873\n(10 rows)\n\nI can supply the data I used for this, just send me an offlist email.\nIt's about 19MB using bzip2.\n\nI didn't look at the make check data and I do see some animals use the\nparallel_schedule for make installcheck, which my regex neglected to\naccount for, so that data is missing from the set.\n\nDavid\n\n\n",
"msg_date": "Tue, 16 Jun 2020 22:00:39 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Recording test runtimes with the buildfarm"
}
] |
[
{
"msg_contents": "A customer's upgrade failed, and it took me a while to\nfigure out that the problem was that they had set\n\"vacuum_defer_cleanup_age=10000\" on the new cluster.\n\nThe consequence was that the \"vacuumdb --freeze\" that\ntakes place before copying commit log files failed to\nfreeze \"pg_database\".\nThat caused later updates to the table to fail with\n\"Could not open file \"pg_xact/0000\": No such file or directory.\"\n\nI think it would increase the robustness of pg_upgrade to\nforce \"vacuum_defer_cleanup_age\" to 0 on the new cluster.\n\nSuggested patch attached.\n\nYours,\nLaurenz Albe",
"msg_date": "Wed, 10 Jun 2020 16:07:05 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "pg_upgrade fails if vacuum_defer_cleanup_age > 0"
},
{
"msg_contents": "On Wed, Jun 10, 2020 at 04:07:05PM +0200, Laurenz Albe wrote:\n> A customer's upgrade failed, and it took me a while to\n> figure out that the problem was that they had set\n> \"vacuum_defer_cleanup_age=10000\" on the new cluster.\n> \n> The consequence was that the \"vacuumdb --freeze\" that\n> takes place before copying commit log files failed to\n> freeze \"pg_database\".\n> That caused later updates to the table to fail with\n> \"Could not open file \"pg_xact/0000\": No such file or directory.\"\n> \n> I think it would increase the robustness of pg_upgrade to\n> force \"vacuum_defer_cleanup_age\" to 0 on the new cluster.\n\nWow, I can see setting \"vacuum_defer_cleanup_age\" to a non-zero value as\ncausing big problems for pg_upgrade, and being hard to diagnose. I will\napply this patch to all branches.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Sat, 13 Jun 2020 08:46:36 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails if vacuum_defer_cleanup_age > 0"
},
{
"msg_contents": "On Sat, Jun 13, 2020 at 08:46:36AM -0400, Bruce Momjian wrote:\n> On Wed, Jun 10, 2020 at 04:07:05PM +0200, Laurenz Albe wrote:\n> > A customer's upgrade failed, and it took me a while to\n> > figure out that the problem was that they had set\n> > \"vacuum_defer_cleanup_age=10000\" on the new cluster.\n> > \n> > The consequence was that the \"vacuumdb --freeze\" that\n> > takes place before copying commit log files failed to\n> > freeze \"pg_database\".\n> > That caused later updates to the table to fail with\n> > \"Could not open file \"pg_xact/0000\": No such file or directory.\"\n> > \n> > I think it would increase the robustness of pg_upgrade to\n> > force \"vacuum_defer_cleanup_age\" to 0 on the new cluster.\n> \n> Wow, I can see setting \"vacuum_defer_cleanup_age\" to a non-zero value as\n> causing big problems for pg_upgrade, and being hard to diagnose. I will\n> apply this patch to all branches.\n\nThank you, applied to all supported PG versions.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Mon, 15 Jun 2020 20:59:56 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails if vacuum_defer_cleanup_age > 0"
},
{
"msg_contents": "On Mon, 2020-06-15 at 20:59 -0400, Bruce Momjian wrote:\n> On Sat, Jun 13, 2020 at 08:46:36AM -0400, Bruce Momjian wrote:\n> > On Wed, Jun 10, 2020 at 04:07:05PM +0200, Laurenz Albe wrote:\n> > > A customer's upgrade failed, and it took me a while to\n> > > figure out that the problem was that they had set\n> > > \"vacuum_defer_cleanup_age=10000\" on the new cluster.\n> > > \n> > > The consequence was that the \"vacuumdb --freeze\" that\n> > > takes place before copying commit log files failed to\n> > > freeze \"pg_database\".\n> > > That caused later updates to the table to fail with\n> > > \"Could not open file \"pg_xact/0000\": No such file or directory.\"\n> > > \n> > > I think it would increase the robustness of pg_upgrade to\n> > > force \"vacuum_defer_cleanup_age\" to 0 on the new cluster.\n>\n> Thank you, applied to all supported PG versions.\n\nThanks for picking this up and taking care of it.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Tue, 16 Jun 2020 08:39:57 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade fails if vacuum_defer_cleanup_age > 0"
},
{
"msg_contents": "On Tue, Jun 16, 2020 at 08:39:57AM +0200, Laurenz Albe wrote:\n> On Mon, 2020-06-15 at 20:59 -0400, Bruce Momjian wrote:\n> > On Sat, Jun 13, 2020 at 08:46:36AM -0400, Bruce Momjian wrote:\n> > > On Wed, Jun 10, 2020 at 04:07:05PM +0200, Laurenz Albe wrote:\n> > > > A customer's upgrade failed, and it took me a while to\n> > > > figure out that the problem was that they had set\n> > > > \"vacuum_defer_cleanup_age=10000\" on the new cluster.\n> > > > \n> > > > The consequence was that the \"vacuumdb --freeze\" that\n> > > > takes place before copying commit log files failed to\n> > > > freeze \"pg_database\".\n> > > > That caused later updates to the table to fail with\n> > > > \"Could not open file \"pg_xact/0000\": No such file or directory.\"\n> > > > \n> > > > I think it would increase the robustness of pg_upgrade to\n> > > > force \"vacuum_defer_cleanup_age\" to 0 on the new cluster.\n> >\n> > Thank you, applied to all supported PG versions.\n> \n> Thanks for picking this up and taking care of it.\n\nSure. I never noticed how this setting, when it was added in 2009,\ncould affect pg_uprade, but is certainly can:\n\n\tcommit efc16ea520\n\tAuthor: Simon Riggs <simon@2ndQuadrant.com>\n\tDate: Sat Dec 19 01:32:45 2009 +0000\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Tue, 16 Jun 2020 05:22:12 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails if vacuum_defer_cleanup_age > 0"
}
] |
[
{
"msg_contents": "Hi hackers,\r\n\r\nI quickly put together a patch to add INDEX_CLEANUP and TRUNCATE to\r\nvacuumdb before noticing a previous thread for it [0]. My take on it\r\nwas to just name the options --skip-index-cleanup and --skip-truncate.\r\nWhile that does not give you a direct mapping to the corresponding\r\nVACUUM options, it simplifies the patch by avoiding the boolean\r\nparameter parsing stuff altogether.\r\n\r\nNathan\r\n\r\n[0] https://postgr.es/m/CAHGQGwENx3Kvxq0U%2BwkGAdoAd89iaaWo_Pd5LBPUO4AqqhgyYQ%40mail.gmail.com",
"msg_date": "Thu, 11 Jun 2020 00:41:17 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Add support for INDEX_CLEANUP and TRUNCATE to vacuumdb"
},
{
"msg_contents": "On Thu, Jun 11, 2020 at 12:41:17AM +0000, Bossart, Nathan wrote:\n> I quickly put together a patch to add INDEX_CLEANUP and TRUNCATE to\n> vacuumdb before noticing a previous thread for it [0]. My take on it\n> was to just name the options --skip-index-cleanup and --skip-truncate.\n> While that does not give you a direct mapping to the corresponding\n> VACUUM options, it simplifies the patch by avoiding the boolean\n> parameter parsing stuff altogether.\n\nCannot blame you for that. There is little sense to have a pure\nmapping with the options here with some custom boolean parsing. What\nabout naming them --no-index-cleanup and --no-truncate instead? I\nwould suggest to track the option values with variables named like\ndo_truncate and do_index_cleanup. That would be similar with what we\ndo with --no-sync for example.\n--\nMichael",
"msg_date": "Thu, 11 Jun 2020 16:09:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add support for INDEX_CLEANUP and TRUNCATE to vacuumdb"
},
{
"msg_contents": "On Thu, 11 Jun 2020 at 09:41, Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> Hi hackers,\n>\n> I quickly put together a patch to add INDEX_CLEANUP and TRUNCATE to\n> vacuumdb before noticing a previous thread for it [0]. My take on it\n> was to just name the options --skip-index-cleanup and --skip-truncate.\n> While that does not give you a direct mapping to the corresponding\n> VACUUM options, it simplifies the patch by avoiding the boolean\n> parameter parsing stuff altogether.\n>\n\nThank you for updating the patch!\n\nI looked at this patch.\n\n@@ -412,6 +434,13 @@ vacuum_one_database(const char *dbname,\nvacuumingOptions *vacopts,\n exit(1);\n }\n\n+ if (vacopts->skip_index_cleanup && PQserverVersion(conn) < 120000)\n+ {\n+ PQfinish(conn);\n+ pg_log_error(\"cannot use the \\\"%s\\\" option on server versions\nolder than PostgreSQL %s\",\n+ \"skip-index-cleanup\", \"12\");\n+ }\n+\n if (vacopts->skip_locked && PQserverVersion(conn) < 120000)\n {\n PQfinish(conn);\n\nexit(1) is missing after pg_log_error().\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 11 Jun 2020 18:03:43 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add support for INDEX_CLEANUP and TRUNCATE to vacuumdb"
},
{
"msg_contents": "Thanks for the quick feedback. Here is a new patch.\r\n\r\nNathan",
"msg_date": "Thu, 11 Jun 2020 16:55:48 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add support for INDEX_CLEANUP and TRUNCATE to vacuumdb"
},
{
"msg_contents": "On 6/11/20, 10:13 AM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> Thanks for the quick feedback. Here is a new patch.\r\n\r\nIt looks like I missed a couple of tags in the documentation changes.\r\nThat should be fixed in v3.\r\n\r\nNathan",
"msg_date": "Thu, 18 Jun 2020 21:26:50 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add support for INDEX_CLEANUP and TRUNCATE to vacuumdb"
},
{
"msg_contents": "On Thu, Jun 18, 2020 at 09:26:50PM +0000, Bossart, Nathan wrote:\n> It looks like I missed a couple of tags in the documentation changes.\n> That should be fixed in v3.\n\nThanks. This flavor looks good to me in terms of code, and the test\ncoverage is what's needed for all the code paths added. This version\nis using my suggestion of upthread for the option names: --no-truncate\nand --no-index-cleanup. Are people fine with this choice?\n--\nMichael",
"msg_date": "Fri, 19 Jun 2020 10:57:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add support for INDEX_CLEANUP and TRUNCATE to vacuumdb"
},
{
"msg_contents": "On Fri, Jun 19, 2020 at 10:57:01AM +0900, Michael Paquier wrote:\n> Thanks. This flavor looks good to me in terms of code, and the test\n> coverage is what's needed for all the code paths added. This version\n> is using my suggestion of upthread for the option names: --no-truncate\n> and --no-index-cleanup. Are people fine with this choice?\n\nOkay. I have gone through the patch again, and applied it as of\n9550ea3. Thanks.\n--\nMichael",
"msg_date": "Mon, 22 Jun 2020 13:35:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add support for INDEX_CLEANUP and TRUNCATE to vacuumdb"
},
{
"msg_contents": "On 6/21/20, 9:36 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> Okay. I have gone through the patch again, and applied it as of\r\n> 9550ea3. Thanks.\r\n\r\nThanks!\r\n\r\nNathan\r\n\r\n",
"msg_date": "Mon, 22 Jun 2020 16:24:57 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add support for INDEX_CLEANUP and TRUNCATE to vacuumdb"
}
] |
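As an aside on the thread above: the committed vacuumdb change maps the new `--no-index-cleanup` and `--no-truncate` switches onto VACUUM's parenthesized option list. Below is a minimal, hypothetical C sketch of that mapping. The struct and function names are invented for illustration and are not vacuumdb's actual code; only the `INDEX_CLEANUP` and `TRUNCATE` option spellings come from the thread.

```c
/* Hypothetical sketch (not vacuumdb's actual code): shows how boolean
 * command-line toggles such as --no-index-cleanup and --no-truncate can
 * be folded into the parenthesized option list of the VACUUM command
 * sent to the server.  Only the option spellings match VACUUM's
 * grammar; the struct and function names here are invented. */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

typedef struct VacOpts
{
    bool do_index_cleanup;   /* false when --no-index-cleanup is given */
    bool do_truncate;        /* false when --no-truncate is given */
} VacOpts;

/* Build "VACUUM" or "VACUUM (...)" into buf; returns buf. */
static char *
build_vacuum_command(const VacOpts *opts, char *buf, size_t len)
{
    const char *sep = "";

    snprintf(buf, len, "VACUUM");
    if (!opts->do_index_cleanup || !opts->do_truncate)
    {
        strncat(buf, " (", len - strlen(buf) - 1);
        if (!opts->do_index_cleanup)
        {
            strncat(buf, "INDEX_CLEANUP FALSE", len - strlen(buf) - 1);
            sep = ", ";
        }
        if (!opts->do_truncate)
        {
            strncat(buf, sep, len - strlen(buf) - 1);
            strncat(buf, "TRUNCATE FALSE", len - strlen(buf) - 1);
        }
        strncat(buf, ")", len - strlen(buf) - 1);
    }
    return buf;
}
```

With both switches given, this yields `VACUUM (INDEX_CLEANUP FALSE, TRUNCATE FALSE)`, matching the server-side syntax the thread targets; per the patch hunks quoted above, the real code additionally gates each option on `PQserverVersion(conn) < 120000`.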
[
{
"msg_contents": "Hi hackers! I added TAP test code for the pg_dump --extra-float-digits option\nbecause it hadn't been tested. There was no problem when writing the test code\nand running the TAP tests.",
"msg_date": "Thu, 11 Jun 2020 14:25:37 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "Add tap test for --extra-float-digits option"
},
{
"msg_contents": "On Thu, Jun 11, 2020 at 02:25:37PM +0900, Dong Wook Lee wrote:\n> Hi hackers! I added tap test code for pg_dump --extra-float-digits option\n> because it hadn't tested it. There was no problem when writing test code\n> and running TAP tests.\n\nIf we go down to that (there is a test for --compression), what about\n--rows-per-insert?\n--\nMichael",
"msg_date": "Thu, 11 Jun 2020 16:13:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add tap test for --extra-float-digits option"
},
{
"msg_contents": "Oh, now I understand. And I added a test for the --rows-per-insert option.\nI'd better look for more options that are missing tests.\n\nOn Fri, Jun 12, 2020 at 4:04 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Fri, Jun 12, 2020 at 02:30:35PM +0900, Dong Wook Lee wrote:\n> > Thank you for your response\n> > Do you mean to move it under the test of --compression option?\n>\n> You could move the test where you see is best, and I would have done\n> that. My point is that we could have a test also for\n> --rows-per-insert as it deals with the same problem.\n> --\n> Michael\n>",
"msg_date": "Fri, 12 Jun 2020 18:15:36 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add tap test for --extra-float-digits option"
},
{
"msg_contents": "On Fri, Jun 12, 2020 at 06:15:36PM +0900, Dong Wook Lee wrote:\n> Oh, now I understand. And I added a test for the --rows-per-insert option.\n\nThat's more of a habit: look around, find similar patterns, and then\ncheck whether these are covered.\n\nI have applied your patch, and you may want to be careful about a\ncouple of things:\n- Please avoid top-posting on the mailing lists:\nhttps://en.wikipedia.org/wiki/Posting_style#Top-posting\nTop-posting breaks the logic of a thread.\n- Your patch format is good. When sending a new version of a patch,\nit may be better to send things as a complete diff against the master\nbranch (or the branch you are working on), instead of just sending one\npatch that applies on top of something you sent previously. Here, for\nexample, your patch 0002 applied on top of 0001, which was sent at the\ntop of the thread. We also have guidelines about patch submission:\nhttps://wiki.postgresql.org/wiki/Submitting_a_Patch\n\nThanks!\n--\nMichael",
"msg_date": "Sat, 13 Jun 2020 09:44:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add tap test for --extra-float-digits option"
},
{
"msg_contents": "> That's more of a habit: look around, find similar patterns, and then\n> check whether these are covered.\n>\n> I have applied your patch, and you may want to be careful about a\n> couple of things:\n> - Please avoid top-posting on the mailing lists:\n> https://en.wikipedia.org/wiki/Posting_style#Top-posting\n> Top-posting breaks the logic of a thread.\n> - Your patch format is good. When sending a new version of a patch,\n> it may be better to send things as a complete diff against the master\n> branch (or the branch you are working on), instead of just sending one\n> patch that applies on top of something you sent previously. Here, for\n> example, your patch 0002 applied on top of 0001, which was sent at the\n> top of the thread. We also have guidelines about patch submission:\n> https://wiki.postgresql.org/wiki/Submitting_a_Patch\n>\n> Thanks!\n> --\n> Michael\n\nHi Michael,\n\nFirst of all, thank you for merging my patch.\nI'm sorry, I should have been more careful about that; next time I\nwill follow the format. There is also something I would like to ask.\n\nWould you mind specifying my author info\nwith --author on the git commit?\n\nNew contributors can get involved in the PostgreSQL project, and\nwhen a patch they sent is merged into the main repository,\nit would be better to keep the author info on the git commit, IMHO.\n\nMany open-source hackers interested in the\nPostgreSQL project want to keep a record of author info\non the commits they wrote. Otherwise, contribution records cannot be found\nby 'git shortlog -sn', and GitHub and OpenHub cannot track their\nopen-source contribution records.\n\nSo what about using --author for PostgreSQL contributors\nwhen merging their patches, like the Linux kernel project does?\n\nhttps://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=8a16c09edc58982d56c49ab577fdcdf830fbc3a5\n\nIf so, many contributors would be highly encouraged.\n\nThanks,\nDong Wook\n\n\n",
"msg_date": "Sat, 13 Jun 2020 18:34:46 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add tap test for --extra-float-digits option"
},
{
"msg_contents": "On Sat, Jun 13, 2020 at 06:34:46PM +0900, Dong Wook Lee wrote:\n> First of all, thank you for merging my patch.\n> And I'm sorry, I should have been more careful about it. Next time I\n> will follow format. And there is something I will tell you\n\nWe are all here to learn. It is good to begin with small\ncontributions to get a sense of how the project works, so I think that\nyou are doing well.\n\n> Because many opensource hackers who interested in\n> PostgreSQL project can want to keep a record of author info\n> on commits they wrote. Otherwise, contribution records can not be found\n> by 'git shortlog -sn' and GitHub and OpenHub cannot track their\n> opensource contribution records...\n> \n> So what about using --author for PostgreSQL contributors\n> when merging their patches? like the Linux Kernel project\n\nThat may be something to discuss with the project policy per-se. When\nit comes to credit people, committers list authors, reviewers,\nreporters, etc. directly in the commit log. And your name is\nmentioned in 64725728, I made sure of it. The latest discussions we\nhad about the commit log format involved encouraging as much as\npossible the use of a \"Discussion\" tag in commit logs, the rest\ndepends on each committer, and nobody uses --author.\n--\nMichael",
"msg_date": "Sun, 14 Jun 2020 10:11:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add tap test for --extra-float-digits option"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile working on [1], I came across a bug.\n\nReproduction steps:\n\ncreate table foo (a int, b int) partition by list (a);\ncreate table foo1 (c int, b int, a int);\nalter table foo1 drop c;\nalter table foo attach partition foo1 for values in (1);\ncreate table foo2 partition of foo for values in (2);\ncreate table foo3 partition of foo for values in (3);\ncreate or replace function trigfunc () returns trigger language\nplpgsql as $$ begin new.b := 2; return new; end; $$;\ncreate trigger trig before insert on foo2 for each row execute\nfunction trigfunc();\ninsert into foo values (1, 1), (2, 2), (3, 3);\nupdate foo set a = 2 from (values (1), (2), (3)) s(x) where a = s.x returning *;\nERROR: attribute 5 of type record has wrong type\nDETAIL: Table has type record, but query expects integer.\n\nThe error occurs when projecting the RETURNING list. The problem is\nthat the projection being used when the error occurs belongs to result\nrelation foo2 which is the destination partition of a row movement\noperation, but it's trying to access a column in the tuple produced by\nthe plan belonging to foo1, the source partition of the row movement.\n foo2's RETURNING projection can only work correctly when it's being\nfed tuples from the plan belonging to foo2.\n\nNote that the targetlists of the plans belonging to different result\nrelations can be different depending on the respective relation's\ntuple descriptors, so are not interchangeable. 
Also, each result\nrelation's RETURNING list is made to refer to its own plan's output.\nWithout row movement, there is only one result relation to consider,\nso there's no confusion regarding which RETURNING list to compute.\nWith row movement however, while there is only one plan tuple, there\nare two result relations to consider each with its own RETURNING list.\nI think we should be computing the *source* relation's RETURNING list,\nbecause only that one of the two can consume the plan tuple correctly.\nAttached is a patch that fixes things to be that way.\n\nBy the way, the problem exists since PG 11 when UPDATE row movement\nfeature went in.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/CA%2BHiwqH-2sq-3Zq-CtuWjfRSyrGPXJBf1nCKKvTHuGVyfQ1OYA%40mail.gmail.com",
"msg_date": "Thu, 11 Jun 2020 18:10:07 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "problem with RETURNING and update row movement"
},
{
"msg_contents": "Hi Amit-san,\n\nOn Thu, Jun 11, 2020 at 6:10 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> Reproduction steps:\n>\n> create table foo (a int, b int) partition by list (a);\n> create table foo1 (c int, b int, a int);\n> alter table foo1 drop c;\n> alter table foo attach partition foo1 for values in (1);\n> create table foo2 partition of foo for values in (2);\n> create table foo3 partition of foo for values in (3);\n> create or replace function trigfunc () returns trigger language\n> plpgsql as $$ begin new.b := 2; return new; end; $$;\n> create trigger trig before insert on foo2 for each row execute\n> function trigfunc();\n> insert into foo values (1, 1), (2, 2), (3, 3);\n> update foo set a = 2 from (values (1), (2), (3)) s(x) where a = s.x returning *;\n> ERROR: attribute 5 of type record has wrong type\n> DETAIL: Table has type record, but query expects integer.\n\nReproduced. Could you add the patch to the next commitfest so that it\ndoesn't get lost?\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Sun, 14 Jun 2020 16:23:49 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
{
"msg_contents": "On Sun, Jun 14, 2020 at 4:23 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> Hi Amit-san,\n>\n> On Thu, Jun 11, 2020 at 6:10 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Reproduction steps:\n> >\n> > create table foo (a int, b int) partition by list (a);\n> > create table foo1 (c int, b int, a int);\n> > alter table foo1 drop c;\n> > alter table foo attach partition foo1 for values in (1);\n> > create table foo2 partition of foo for values in (2);\n> > create table foo3 partition of foo for values in (3);\n> > create or replace function trigfunc () returns trigger language\n> > plpgsql as $$ begin new.b := 2; return new; end; $$;\n> > create trigger trig before insert on foo2 for each row execute\n> > function trigfunc();\n> > insert into foo values (1, 1), (2, 2), (3, 3);\n> > update foo set a = 2 from (values (1), (2), (3)) s(x) where a = s.x returning *;\n> > ERROR: attribute 5 of type record has wrong type\n> > DETAIL: Table has type record, but query expects integer.\n>\n> Reproduced. Could you add the patch to the next commitfest so that it\n> doesn't get lost?\n\nDone. Thank you for taking a look.\n\nhttps://commitfest.postgresql.org/28/2597/\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 14 Jun 2020 22:48:12 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
{
"msg_contents": "Amit-san,\r\n\r\nHello. I've tested your patch.\r\n\r\nThe patch applies cleanly, and make check-world produced no failures.\r\n\r\nI didn't run a performance test,\r\nbecause in my opinion this patch has no effect on performance.\r\n\r\nI reproduced the bug you described before,\r\nand confirmed that your patch now prevents it.\r\n\r\nAfter applying your patch, I got the correct output without an error\r\nusing the test case in the first mail of this thread.\r\n\r\nJust a small comment about your patch:\r\nI felt the test you added in update.sql could be made simpler or shorter.\r\nExcuse me if I say something silly,\r\nbut I suppose you can check that the bug is prevented without defining both a function and its trigger for this case. Neither of them is essentially connected with the row movement between the source partition and the destination partition, so couldn't they be replaced by a simpler expression?\r\n\r\nBest,\r\n\tTakamichi Osumi\r\n",
"msg_date": "Tue, 14 Jul 2020 11:26:32 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: problem with RETURNING and update row movement"
},
{
"msg_contents": "Hi Takamichi-san,\n\nOn Tue, Jul 14, 2020 at 8:26 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> Amit san\n>\n>\n> Hello. I've tested your patch.\n\nThanks for that.\n\n> Just small comment about your patch.\n> I felt the test you added in update.sql could be simpler or shorter in other form.\n> Excuse me if I say something silly.\n> It's because I supposed you can check the bug is prevented without definitions of both a function and its trigger for this case. Neither of them is essentially connected with the row movement between source partition and destination partition and can be replaced by simpler expression ?\n\nWell, it's true that the function and the trigger have nothing to do\nwith the main bug, but it's often good to be sure that the bug-fix\nisn't breaking cases where they are present and have visible effect.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 16 Jul 2020 18:38:03 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
{
"msg_contents": "On Thu, Jun 11, 2020 at 2:40 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Hi,\n>\n> While working on [1], I came across a bug.\n>\n> Reproduction steps:\n>\n> create table foo (a int, b int) partition by list (a);\n> create table foo1 (c int, b int, a int);\n> alter table foo1 drop c;\n> alter table foo attach partition foo1 for values in (1);\n> create table foo2 partition of foo for values in (2);\n> create table foo3 partition of foo for values in (3);\n> create or replace function trigfunc () returns trigger language\n> plpgsql as $$ begin new.b := 2; return new; end; $$;\n> create trigger trig before insert on foo2 for each row execute\n> function trigfunc();\n> insert into foo values (1, 1), (2, 2), (3, 3);\n> update foo set a = 2 from (values (1), (2), (3)) s(x) where a = s.x returning *;\n> ERROR: attribute 5 of type record has wrong type\n> DETAIL: Table has type record, but query expects integer.\n>\n> The error occurs when projecting the RETURNING list. The problem is\n> that the projection being used when the error occurs belongs to result\n> relation foo2 which is the destination partition of a row movement\n> operation, but it's trying to access a column in the tuple produced by\n> the plan belonging to foo1, the source partition of the row movement.\n> foo2's RETURNING projection can only work correctly when it's being\n> fed tuples from the plan belonging to foo2.\n>\n\nIIUC, here the problem is related to below part of code:\nExecInsert(..)\n{\n/* Process RETURNING if present */\nif (resultRelInfo->ri_projectReturning)\nresult = ExecProcessReturning(resultRelInfo, slot, planSlot);\n..\n}\n\nThe reason is that planSlot is for foo1 and slot is for foo2 and when\nit tries to access tuple during ExecProcessReturning(), it results in\nan error. Is my understanding correct? 
If so, then can't we ensure\nsomeway that planSlot also belongs to foo2 instead of skipping return\nprocessing in Insert and then later do more work to perform in Update.\n\nLike Takamichi-san, I also think here we don't need trigger/function\nin the test case. If one reads the comment you have added in the test\ncase, it is not evident why is trigger or function required. If you\nreally think it is important to cover the trigger case then either\nhave a separate test or at least add some comments on how trigger\nhelps here or what you want to test it.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 20 Jul 2020 17:05:36 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
{
"msg_contents": "Hi Amit,\n\nThanks for taking a look at this.\n\nOn Mon, Jul 20, 2020 at 8:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> IIUC, here the problem is related to below part of code:\n> ExecInsert(..)\n> {\n> /* Process RETURNING if present */\n> if (resultRelInfo->ri_projectReturning)\n> result = ExecProcessReturning(resultRelInfo, slot, planSlot);\n> ..\n> }\n>\n> The reason is that planSlot is for foo1 and slot is for foo2 and when\n> it tries to access tuple during ExecProcessReturning(), it results in\n> an error. Is my understanding correct?\n\nYes. Specifically, the problem exists if there are any non-target\nrelation attributes in RETURNING which are computed by referring to\nplanSlot, the plan's output tuple, which may be shaped differently\namong result relations due to their tuple descriptors being different.\n\n> If so, then can't we ensure\n> someway that planSlot also belongs to foo2 instead of skipping return\n> processing in Insert and then later do more work to perform in Update.\n\nI did consider that option but failed to see a way to make it work.\n\nI am not sure if there is a way to make a copy of the plan's output\ntuple (planSlot) that is compatible with the destination partition.\nSimple conversion using execute_attr_map_slot() is okay when we know\nthe source and the target slots contain relation tuples, but plan's\noutput tuples will also have other junk attributes. Also, not all\ndestination partitions have an associated plan and hence a slot to\nhold plan tuples.\n\n> Like Takamichi-san, I also think here we don't need trigger/function\n> in the test case. If one reads the comment you have added in the test\n> case, it is not evident why is trigger or function required. If you\n> really think it is important to cover the trigger case then either\n> have a separate test or at least add some comments on how trigger\n> helps here or what you want to test it.\n\nThat's fair. I have updated the test comment.\n\nTo expand on that here, because now we'll be computing RETURNING using\nthe source partition's projection and the tuple in the source\npartition's format, I wanted to make sure that any changes made by the\ndestination partition's triggers are reflected in the output.\n\nPFA v2.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 22 Jul 2020 15:16:02 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
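To illustrate the conversion issue discussed in the message above (why tuples from partition foo1, declared as (c, b, a) with c later dropped, must go through an attribute map before they match foo's column order), here is a toy C sketch. It is not PostgreSQL's TupleTableSlot code; it only mirrors the AttrMap convention used by execute_attr_map_slot(), where each parent column is filled from a 1-based child attribute number. All names here are invented for illustration.

```c
/* Toy model of child-to-parent tuple conversion.  In PostgreSQL, a
 * partition's physical column order can differ from its parent's (e.g.
 * foo1 was created as (c, b, a) and then c was dropped), so an
 * attribute map is needed to reorder child values into the parent's
 * layout, as execute_attr_map_slot() does for real slots. */
#include <assert.h>

/* attrmap[i] = 1-based child attribute number that supplies parent
 * column i, mirroring PostgreSQL's AttrMap convention. */
static void
convert_tuple(const int *child_values, const int *attrmap,
              int parent_natts, int *parent_values)
{
    for (int i = 0; i < parent_natts; i++)
        parent_values[i] = child_values[attrmap[i] - 1];
}
```

In the thread's example, parent column a lives at child attribute 3 and b at attribute 2, so the map {3, 2} reorders a foo1 tuple into foo's layout. The plan's extra junk attributes, which this toy omits entirely, are exactly what makes the same trick insufficient for converting planSlot.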
{
"msg_contents": "On Wed, Jul 22, 2020 at 3:16 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Mon, Jul 20, 2020 at 8:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > IIUC, here the problem is related to below part of code:\n> > ExecInsert(..)\n> > {\n> > /* Process RETURNING if present */\n> > if (resultRelInfo->ri_projectReturning)\n> > result = ExecProcessReturning(resultRelInfo, slot, planSlot);\n> > ..\n> > }\n> >\n> > The reason is that planSlot is for foo1 and slot is for foo2 and when\n> > it tries to access tuple during ExecProcessReturning(), it results in\n> > an error. Is my understanding correct?\n>\n> Yes. Specifically, the problem exists if there are any non-target\n> relation attributes in RETURNING which are computed by referring to\n> planSlot, the plan's output tuple, which may be shaped differently\n> among result relations due to their tuple descriptors being different.\n>\n> > If so, then can't we ensure\n> > someway that planSlot also belongs to foo2 instead of skipping return\n> > processing in Insert and then later do more work to perform in Update.\n>\n> I did consider that option but failed to see a way to make it work.\n>\n> I am not sure if there is a way to make a copy of the plan's output\n> tuple (planSlot) that is compatible with the destination partition.\n> Simple conversion using execute_attr_map_slot() is okay when we know\n> the source and the target slots contain relation tuples, but plan's\n> output tuples will also have other junk attributes. 
Also, not all\n> destination partitions have an associated plan and hence a slot to\n> hold plan tuples.\n\nYeah, I think it might be possible to create planSlot to pass to\nExecInsert() so that we can process RETURNING within that function,\nbut even if so, that would be cumbersome not only because partitions\ncan have different rowtypes but because they can have different junk\ncolumns as well, because e.g., subplan foreign partitions may have\ndifferent row ID columns as junk columns. The proposed patch is\nsimple, so I would vote for it. (Note: in case of a foreign\npartition, we call ExecForeignInsert() with the source partition’s\nplanSlot in ExecInsert(), which is not correct, but that would sbe OK\nbecause it seems unlikely that the FDW would look at the planSlot for\nINSERT.)\n\nOne thing I noticed is that the patch changes the existing behavior.\nHere is an example:\n\ncreate table range_parted (a text, b bigint) partition by range (a, b);\ncreate table part_a_1_a_10 partition of range_parted for values from\n('a', 1) to ('a', 10);\ncreate table part_b_1_b_10 partition of range_parted for values from\n('b', 1) to ('b', 10);\ncreate function trigfunc() returns trigger as $$ begin return null;\nend; $$ language plpgsql;\ncreate trigger trig before insert on part_b_1_b_10 for each row\nexecute function trigfunc();\ninsert into range_parted values ('a', 1);\n\nIn HEAD:\n\npostgres=# update range_parted r set a = 'b' from (values ('a', 1))\ns(x, y) where s.x = r.a and s.y = r.b returning tableoid::regclass,\nr.*;\n tableoid | a | b\n----------+---+---\n(0 rows)\n\nUPDATE 0\n\nBut with the patch:\n\npostgres=# update range_parted r set a = 'b' from (values ('a', 1))\ns(x, y) where s.x = r.a and s.y = r.b returning tableoid::regclass,\nr.*;\n tableoid | a | b\n---------------+---+---\n part_a_1_a_10 | b | 1\n(1 row)\n\nUPDATE 1\n\nThis produces RETURNING, though INSERT on the destination partition\nwas skipped by the trigger.\n\nAnother thing is that the patch 
assumes that the tuple slot to pass to\nExecInsert() would store the inserted tuple when doing that function,\nbut that’s not always true, because in case of a foreign partition,\nthe FDW may return a slot other than the passed-in slot when called\nfrom ExecForeignInsert(), in which case the passed-in slot would not\nstore the inserted tuple anymore.\n\nTo fix these, I modified the patch so that we 1) add to ExecInsert()\nan output parameter slot to store the inserted tuple, and 2) compute\nRETURNING based on the parameter. I also added a regression test\ncase. Attached is an updated version of the patch.\n\nSorry for the long delay.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Thu, 30 Jul 2020 17:40:40 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
{
"msg_contents": "Yet another thing I noticed is that the patch incorrectly produces\nvalues for the tableoid columns specified in the RETURNING list, like\nthis:\n\n+UPDATE range_parted r set a = 'c' FROM (VALUES ('a', 1), ('a', 10),\n('b', 12)) s(x, y) WHERE s.x = r.a AND s.y = r.b RETURNING\ntableoid::regclass, *;\n+ tableoid | a | b | c | d | e | x | y\n+----------------+---+----+-----+---+---------------+---+----\n+ part_a_1_a_10 | c | 1 | 1 | 1 | in trigfunc() | a | 1\n+ part_a_10_a_20 | c | 10 | 200 | 1 | in trigfunc() | a | 10\n+ part_c_1_100 | c | 12 | 96 | 1 | in trigfunc() | b | 12\n+(3 rows)\n\nThe source partitions are shown as tableoid, but the destination\npartition (ie, part_c_1_c_20) should be shown. To fix this, I\nmodified the patch further so that 1) we override tts_tableOid of the\noriginal slot with the OID of the destination partition before calling\nExecProcessReturning() if needed, and 2) in ExecProcessReturning(), we\nonly initialize ecxt_scantuple's tts_tableOid when needed, which would\nsave cycles a bit for non-foreign-table-direct-modification cases.\n\nAttached is a new version of the patch.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Sun, 2 Aug 2020 17:57:41 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
{
"msg_contents": "Fujita-san,\n\nThanks for your time on this.\n\nOn Sun, Aug 2, 2020 at 5:57 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> Yet another thing I noticed is that the patch incorrectly produces\n> values for the tableoid columns specified in the RETURNING list, like\n> this:\n\nYeah, I noticed that too.\n\n> +UPDATE range_parted r set a = 'c' FROM (VALUES ('a', 1), ('a', 10),\n> ('b', 12)) s(x, y) WHERE s.x = r.a AND s.y = r.b RETURNING\n> tableoid::regclass, *;\n> + tableoid | a | b | c | d | e | x | y\n> +----------------+---+----+-----+---+---------------+---+----\n> + part_a_1_a_10 | c | 1 | 1 | 1 | in trigfunc() | a | 1\n> + part_a_10_a_20 | c | 10 | 200 | 1 | in trigfunc() | a | 10\n> + part_c_1_100 | c | 12 | 96 | 1 | in trigfunc() | b | 12\n> +(3 rows)\n>\n> The source partitions are shown as tableoid, but the destination\n> partition (ie, part_c_1_c_20) should be shown. To fix this, I\n> modified the patch further so that 1) we override tts_tableOid of the\n> original slot with the OID of the destination partition before calling\n> ExecProcessReturning() if needed, and 2) in ExecProcessReturning(), we\n> only initialize ecxt_scantuple's tts_tableOid when needed, which would\n> save cycles a bit for non-foreign-table-direct-modification cases.\n>\n> Attached is a new version of the patch.\n\nThanks for the updated patch. I reviewed your changes in v3 too and\nthey looked fine to me.\n\nHowever, I noticed that having system columns like ctid, xmin, etc. in\nthe RETURNING list is now broken and maybe irrepairably due to the\napproach we are taking in the patch. 
Let me show an example:\n\ndrop table foo;\ncreate table foo (a int, b int) partition by list (a);\ncreate table foo1 (c int, b int, a int);\nalter table foo1 drop c;\nalter table foo attach partition foo1 for values in (1);\ncreate table foo2 partition of foo for values in (2);\ncreate table foo3 partition of foo for values in (3);\ncreate or replace function trigfunc () returns trigger language\nplpgsql as $$ begin new.b := 2; return new; end; $$;\ncreate trigger trig before insert on foo2 for each row execute\nfunction trigfunc();\ninsert into foo values (1, 1), (2, 2), (3, 3);\nupdate foo set a = 2 from (values (1), (2), (3)) s(x) where a = s.x\nreturning tableoid::regclass, ctid, xmin, xmax, *;\n tableoid | ctid | xmin | xmax | a | b | x\n----------+----------------+------+------------+---+---+---\n foo2 | (4294967295,0) | 128 | 4294967295 | 2 | 2 | 1\n foo2 | (0,3) | 782 | 0 | 2 | 2 | 2\n foo2 | (0,4) | 782 | 0 | 2 | 2 | 3\n(3 rows)\n\nDuring foo1's update, it appears that we are losing the system\ninformation in the physical tuple initialized during ExecInsert on\nfoo2 during its conversion back to foo1's reltype using the new code.\n I haven't been able to figure out how to preserve the system\ninformation in HeapTuple contained in the destination slot across the\nconversion. Note we want to use the latter to project the RETURNING\nlist.\n\nBy the way, you'll need two adjustments to even get this example\nworking, one of which is a reported problem that system columns in\nRETURNING list during an operation on partitioned table stopped\nworking in PG 12 [1] for which I've proposed a workaround (attached).\nAnother is that we forgot in our patch to \"materialize\" the virtual\ntuple after conversion, which means slot_getsysattr() can't find it to\naccess system columns like xmin, etc.\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/141051591267657%40mail.yandex.ru",
"msg_date": "Mon, 3 Aug 2020 14:54:51 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
{
"msg_contents": "On Mon, Aug 3, 2020 at 2:54 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> However, I noticed that having system columns like ctid, xmin, etc. in\n> the RETURNING list is now broken and maybe irrepairably due to the\n> approach we are taking in the patch. Let me show an example:\n>\n> drop table foo;\n> create table foo (a int, b int) partition by list (a);\n> create table foo1 (c int, b int, a int);\n> alter table foo1 drop c;\n> alter table foo attach partition foo1 for values in (1);\n> create table foo2 partition of foo for values in (2);\n> create table foo3 partition of foo for values in (3);\n> create or replace function trigfunc () returns trigger language\n> plpgsql as $$ begin new.b := 2; return new; end; $$;\n> create trigger trig before insert on foo2 for each row execute\n> function trigfunc();\n> insert into foo values (1, 1), (2, 2), (3, 3);\n> update foo set a = 2 from (values (1), (2), (3)) s(x) where a = s.x\n> returning tableoid::regclass, ctid, xmin, xmax, *;\n> tableoid | ctid | xmin | xmax | a | b | x\n> ----------+----------------+------+------------+---+---+---\n> foo2 | (4294967295,0) | 128 | 4294967295 | 2 | 2 | 1\n> foo2 | (0,3) | 782 | 0 | 2 | 2 | 2\n> foo2 | (0,4) | 782 | 0 | 2 | 2 | 3\n> (3 rows)\n>\n> During foo1's update, it appears that we are losing the system\n> information in the physical tuple initialized during ExecInsert on\n> foo2 during its conversion back to foo1's reltype using the new code.\n> I haven't been able to figure out how to preserve the system\n> information in HeapTuple contained in the destination slot across the\n> conversion. Note we want to use the latter to project the RETURNING\n> list.\n>\n> By the way, you'll need two adjustments to even get this example\n> working, one of which is a reported problem that system columns in\n> RETURNING list during an operation on partitioned table stopped\n> working in PG 12 [1] for which I've proposed a workaround (attached).\n> Another is that we forgot in our patch to \"materialize\" the virtual\n> tuple after conversion, which means slot_getsysattr() can't find it to\n> access system columns like xmin, etc.\n\nThe only solution I could think of for this so far is this:\n\n+ if (map)\n+ {\n+ orig_slot = execute_attr_map_slot(map,\n+ res_slot,\n+ orig_slot);\n+\n+ /*\n+ * A HACK to install system information into the just\n+ * converted tuple so that RETURNING computes any\n+ * system columns correctly. This would be the same\n+ * information that would be present in the HeapTuple\n+ * version of the tuple in res_slot.\n+ */\n+ tuple = ExecFetchSlotHeapTuple(orig_slot, true,\n+ &should_free);\n+ tuple->t_data->t_infomask &= ~(HEAP_XACT_MASK);\n+ tuple->t_data->t_infomask2 &= ~(HEAP2_XACT_MASK);\n+ tuple->t_data->t_infomask |= HEAP_XMAX_INVALID;\n+ HeapTupleHeaderSetXmin(tuple->t_data,\n+ GetCurrentTransactionId());\n+ HeapTupleHeaderSetCmin(tuple->t_data,\n+ estate->es_output_cid);\n+ HeapTupleHeaderSetXmax(tuple->t_data, 0); /*\nfor cleanliness */\n+ }\n+ /*\n+ * Override tts_tableOid with the OID of the destination\n+ * partition.\n+ */\n+ orig_slot->tts_tableOid =\nRelationGetRelid(destRel->ri_RelationDesc);\n+ /* Also the TID. */\n+ orig_slot->tts_tid = res_slot->tts_tid;\n\n..but it might be too ugly :(.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 3 Aug 2020 16:39:11 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
{
"msg_contents": "Amit-san,\n\nOn Mon, Aug 3, 2020 at 2:55 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> However, I noticed that having system columns like ctid, xmin, etc. in\n> the RETURNING list is now broken and maybe irrepairably due to the\n> approach we are taking in the patch. Let me show an example:\n>\n> drop table foo;\n> create table foo (a int, b int) partition by list (a);\n> create table foo1 (c int, b int, a int);\n> alter table foo1 drop c;\n> alter table foo attach partition foo1 for values in (1);\n> create table foo2 partition of foo for values in (2);\n> create table foo3 partition of foo for values in (3);\n> create or replace function trigfunc () returns trigger language\n> plpgsql as $$ begin new.b := 2; return new; end; $$;\n> create trigger trig before insert on foo2 for each row execute\n> function trigfunc();\n> insert into foo values (1, 1), (2, 2), (3, 3);\n> update foo set a = 2 from (values (1), (2), (3)) s(x) where a = s.x\n> returning tableoid::regclass, ctid, xmin, xmax, *;\n> tableoid | ctid | xmin | xmax | a | b | x\n> ----------+----------------+------+------------+---+---+---\n> foo2 | (4294967295,0) | 128 | 4294967295 | 2 | 2 | 1\n> foo2 | (0,3) | 782 | 0 | 2 | 2 | 2\n> foo2 | (0,4) | 782 | 0 | 2 | 2 | 3\n> (3 rows)\n>\n> During foo1's update, it appears that we are losing the system\n> information in the physical tuple initialized during ExecInsert on\n> foo2 during its conversion back to foo1's reltype using the new code.\n> I haven't been able to figure out how to preserve the system\n> information in HeapTuple contained in the destination slot across the\n> conversion. Note we want to use the latter to project the RETURNING\n> list.\n\nThanks for pointing that out!\n\n> By the way, you'll need two adjustments to even get this example\n> working, one of which is a reported problem that system columns in\n> RETURNING list during an operation on partitioned table stopped\n> working in PG 12 [1] for which I've proposed a workaround (attached).\n> Another is that we forgot in our patch to \"materialize\" the virtual\n> tuple after conversion, which means slot_getsysattr() can't find it to\n> access system columns like xmin, etc.\n\nOk, I’ll look at those closely.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Tue, 4 Aug 2020 21:25:10 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
{
"msg_contents": "On Mon, Aug 3, 2020 at 4:39 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Mon, Aug 3, 2020 at 2:54 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > By the way, you'll need two adjustments to even get this example\n> > working, one of which is a reported problem that system columns in\n> > RETURNING list during an operation on partitioned table stopped\n> > working in PG 12 [1] for which I've proposed a workaround (attached).\n> > Another is that we forgot in our patch to \"materialize\" the virtual\n> > tuple after conversion, which means slot_getsysattr() can't find it to\n> > access system columns like xmin, etc.\n>\n> The only solution I could think of for this so far is this:\n>\n> + if (map)\n> + {\n> + orig_slot = execute_attr_map_slot(map,\n> + res_slot,\n> + orig_slot);\n> +\n> + /*\n> + * A HACK to install system information into the just\n> + * converted tuple so that RETURNING computes any\n> + * system columns correctly. This would be the same\n> + * information that would be present in the HeapTuple\n> + * version of the tuple in res_slot.\n> + */\n> + tuple = ExecFetchSlotHeapTuple(orig_slot, true,\n> + &should_free);\n> + tuple->t_data->t_infomask &= ~(HEAP_XACT_MASK);\n> + tuple->t_data->t_infomask2 &= ~(HEAP2_XACT_MASK);\n> + tuple->t_data->t_infomask |= HEAP_XMAX_INVALID;\n> + HeapTupleHeaderSetXmin(tuple->t_data,\n> + GetCurrentTransactionId());\n> + HeapTupleHeaderSetCmin(tuple->t_data,\n> + estate->es_output_cid);\n> + HeapTupleHeaderSetXmax(tuple->t_data, 0); /*\n> for cleanliness */\n> + }\n> + /*\n> + * Override tts_tableOid with the OID of the destination\n> + * partition.\n> + */\n> + orig_slot->tts_tableOid =\n> RelationGetRelid(destRel->ri_RelationDesc);\n> + /* Also the TID. */\n> + orig_slot->tts_tid = res_slot->tts_tid;\n>\n> ..but it might be too ugly :(.\n\nYeah, I think that would be a bit ugly, and actually, is not correct\nin case of postgres_fdw foreign table, in which case Xmin and Cmin are\nalso set to 0 [2]. I think we should probably first address the\ntableam issue that you pointed out, but I don't think I'm the right\nperson to do so.\n\nBest regards,\nEtsuro Fujita\n\n[2] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=da7d44b627ba839de32c9409aca659f60324de76\n\n\n",
"msg_date": "Fri, 7 Aug 2020 22:45:51 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
{
"msg_contents": "On Fri, Aug 7, 2020 at 10:45 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Mon, Aug 3, 2020 at 4:39 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Mon, Aug 3, 2020 at 2:54 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > By the way, you'll need two adjustments to even get this example\n> > > working, one of which is a reported problem that system columns in\n> > > RETURNING list during an operation on partitioned table stopped\n> > > working in PG 12 [1] for which I've proposed a workaround (attached).\n> > > Another is that we forgot in our patch to \"materialize\" the virtual\n> > > tuple after conversion, which means slot_getsysattr() can't find it to\n> > > access system columns like xmin, etc.\n> >\n> > The only solution I could think of for this so far is this:\n> >\n> > + if (map)\n> > + {\n> > + orig_slot = execute_attr_map_slot(map,\n> > + res_slot,\n> > + orig_slot);\n> > +\n> > + /*\n> > + * A HACK to install system information into the just\n> > + * converted tuple so that RETURNING computes any\n> > + * system columns correctly. This would be the same\n> > + * information that would be present in the HeapTuple\n> > + * version of the tuple in res_slot.\n> > + */\n> > + tuple = ExecFetchSlotHeapTuple(orig_slot, true,\n> > + &should_free);\n> > + tuple->t_data->t_infomask &= ~(HEAP_XACT_MASK);\n> > + tuple->t_data->t_infomask2 &= ~(HEAP2_XACT_MASK);\n> > + tuple->t_data->t_infomask |= HEAP_XMAX_INVALID;\n> > + HeapTupleHeaderSetXmin(tuple->t_data,\n> > + GetCurrentTransactionId());\n> > + HeapTupleHeaderSetCmin(tuple->t_data,\n> > + estate->es_output_cid);\n> > + HeapTupleHeaderSetXmax(tuple->t_data, 0); /*\n> > for cleanliness */\n> > + }\n> > + /*\n> > + * Override tts_tableOid with the OID of the destination\n> > + * partition.\n> > + */\n> > + orig_slot->tts_tableOid =\n> > RelationGetRelid(destRel->ri_RelationDesc);\n> > + /* Also the TID. */\n> > + orig_slot->tts_tid = res_slot->tts_tid;\n> >\n> > ..but it might be too ugly :(.\n>\n> Yeah, I think that would be a bit ugly, and actually, is not correct\n> in case of postgres_fdw foreign table, in which case Xmin and Cmin are\n> also set to 0 [2].\n\nYeah, need to think a bit harder.\n\n> I think we should probably first address the\n> tableam issue that you pointed out, but I don't think I'm the right\n> person to do so.\n\nOkay, I will try to revive that thread.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 10 Aug 2020 20:43:44 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
{
"msg_contents": "I noticed that this bugfix has stalled, probably because the other\nbugfix has also stalled.\n\nIt seems that cleanly returning system columns from table AM is not\ngoing to be a simple task -- whatever fix we get for that is likely not\ngoing to make it all the way to PG 12. So I suggest that we should fix\nthis bug in 11-13 without depending on that.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 11 Sep 2020 17:42:10 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
{
"msg_contents": "Hi Alvaro,\n\nOn Sat, Sep 12, 2020 at 5:42 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> I noticed that this bugfix has stalled, probably because the other\n> bugfix has also stalled.\n>\n> It seems that cleanly returning system columns from table AM is not\n> going to be a simple task -- whatever fix we get for that is likely not\n> going to make it all the way to PG 12. So I suggest that we should fix\n> this bug in 11-13 without depending on that.\n\nAlthough I would be reversing course on what I said upthread, I tend\nto agree with that, because the core idea behind the fix for this\nissue does not seem likely to be invalidated by any conclusion\nregarding the other issue. That is, as far as the issue here is\nconcerned, we can't avoid falling back to using the source partition's\nRETURNING projection whose scan tuple is provided using the source\npartition's tuple slot.\n\nHowever, given that we have to pass the *new* tuple to the projection,\nnot the old one, we will need a \"clean\" way to transfer its (the new\ntuple's) system columns into the source partition's tuple slot. The\nsketch I gave upthread of what that could look like was not liked by\nFujita-san much.\n\n\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 14 Sep 2020 15:53:24 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
{
"msg_contents": "On Mon, Sep 14, 2020 at 3:53 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Sat, Sep 12, 2020 at 5:42 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > I noticed that this bugfix has stalled, probably because the other\n> > bugfix has also stalled.\n> >\n> > It seems that cleanly returning system columns from table AM is not\n> > going to be a simple task -- whatever fix we get for that is likely not\n> > going to make it all the way to PG 12. So I suggest that we should fix\n> > this bug in 11-13 without depending on that.\n>\n> Although I would be reversing course on what I said upthread, I tend\n> to agree with that, because the core idea behind the fix for this\n> issue does not seem likely to be invalidated by any conclusion\n> regarding the other issue. That is, as far as the issue here is\n> concerned, we can't avoid falling back to using the source partition's\n> RETURNING projection whose scan tuple is provided using the source\n> partition's tuple slot.\n\nI agree on that point.\n\n> However, given that we have to pass the *new* tuple to the projection,\n> not the old one, we will need a \"clean\" way to transfer its (the new\n> tuple's) system columns into the source partition's tuple slot. The\n> sketch I gave upthread of what that could look like was not liked by\n> Fujita-san much.\n\nIIUC, I think two issues are discussed in the thread [1]: (a) there is\ncurrently no way to define the set of meaningful system columns for a\npartitioned table that contains pluggable storages other than standard\nheap, and (b) even in the case where the set is defined as before,\nlike partitioned tables that don’t contain any pluggable storages,\nsystem column values are not obtained from a tuple inserted into a\npartitioned table in cases as reported in [1], because virtual slots\nare assigned for partitioned tables [2][3]. (I think the latter is\nthe original issue in the thread, though.)\n\nI think we could fix this update-tuple-routing-vs-RETURNING issue\nwithout the fix for (a). But to transfer system column values in a\ncleaner way, I think we need to fix (b) first so that we can always\nobtain/copy them from the new tuple moved to another partition by\nINSERT following DELETE.\n\n(I think we could address this issue for v11 independently of [1], off course.)\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/141051591267657%40mail.yandex.ru\n[2] https://www.postgresql.org/message-id/CA%2BHiwqHrsNa4e0MfpSzv7xOM94TvX%3DR0MskYxYWfy0kjL0DAdQ%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/20200811180231.fcssvhelqpnydnx7%40alap3.anarazel.de\n\n\n",
"msg_date": "Mon, 14 Sep 2020 17:56:13 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
{
"msg_contents": "On Mon, Sep 14, 2020 at 5:56 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Mon, Sep 14, 2020 at 3:53 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Sat, Sep 12, 2020 at 5:42 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > > I noticed that this bugfix has stalled, probably because the other\n> > > bugfix has also stalled.\n> > >\n> > > It seems that cleanly returning system columns from table AM is not\n> > > going to be a simple task -- whatever fix we get for that is likely not\n> > > going to make it all the way to PG 12. So I suggest that we should fix\n> > > this bug in 11-13 without depending on that.\n> >\n> > Although I would be reversing course on what I said upthread, I tend\n> > to agree with that, because the core idea behind the fix for this\n> > issue does not seem likely to be invalidated by any conclusion\n> > regarding the other issue. That is, as far as the issue here is\n> > concerned, we can't avoid falling back to using the source partition's\n> > RETURNING projection whose scan tuple is provided using the source\n> > partition's tuple slot.\n>\n> I agree on that point.\n>\n> > However, given that we have to pass the *new* tuple to the projection,\n> > not the old one, we will need a \"clean\" way to transfer its (the new\n> > tuple's) system columns into the source partition's tuple slot. The\n> > sketch I gave upthread of what that could look like was not liked by\n> > Fujita-san much.\n>\n> IIUC, I think two issues are discussed in the thread [1]: (a) there is\n> currently no way to define the set of meaningful system columns for a\n> partitioned table that contains pluggable storages other than standard\n> heap, and (b) even in the case where the set is defined as before,\n> like partitioned tables that don’t contain any pluggable storages,\n> system column values are not obtained from a tuple inserted into a\n> partitioned table in cases as reported in [1], because virtual slots\n> are assigned for partitioned tables [2][3]. (I think the latter is\n> the original issue in the thread, though.)\n\nRight, (b) can be solved by using a leaf partition's tuple slot as\nproposed. Although (a) needs a long term fix.\n\n> I think we could fix this update-tuple-routing-vs-RETURNING issue\n> without the fix for (a). But to transfer system column values in a\n> cleaner way, I think we need to fix (b) first so that we can always\n> obtain/copy them from the new tuple moved to another partition by\n> INSERT following DELETE.\n\nYes, I think you are right. Although, I am still not sure how to\n\"copy\" system columns from one partition's slot to another's, that is,\nwithout assuming what they are.\n\n> (I think we could address this issue for v11 independently of [1], off course.)\n\nYeah.\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 14 Sep 2020 22:45:17 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
{
"msg_contents": "On Mon, Sep 14, 2020 at 10:45 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Mon, Sep 14, 2020 at 5:56 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > IIUC, I think two issues are discussed in the thread [1]: (a) there is\n> > currently no way to define the set of meaningful system columns for a\n> > partitioned table that contains pluggable storages other than standard\n> > heap, and (b) even in the case where the set is defined as before,\n> > like partitioned tables that don’t contain any pluggable storages,\n> > system column values are not obtained from a tuple inserted into a\n> > partitioned table in cases as reported in [1], because virtual slots\n> > are assigned for partitioned tables [2][3]. (I think the latter is\n> > the original issue in the thread, though.)\n>\n> Right, (b) can be solved by using a leaf partition's tuple slot as\n> proposed.\n\nYou mean what is proposed in [3]?\n\n> > I think we could fix this update-tuple-routing-vs-RETURNING issue\n> > without the fix for (a). But to transfer system column values in a\n> > cleaner way, I think we need to fix (b) first so that we can always\n> > obtain/copy them from the new tuple moved to another partition by\n> > INSERT following DELETE.\n>\n> Yes, I think you are right. Although, I am still not sure how to\n> \"copy\" system columns from one partition's slot to another's, that is,\n> without assuming what they are.\n\nI just thought we assume that partitions support all the existing\nsystem attributes until we have the fix for (a), i.e., the slot\nassigned for a partition must have the getsysattr callback routine\nfrom which we can fetch each existing system attribute of a underlying\ntuple in the slot, regardless of whether that system attribute is used\nfor the partition or not.\n\n> > (I think we could address this issue for v11 independently of [1], off course.)\n>\n> Yeah.\n\nI noticed that I modified your patch incorrectly. Sorry for that.\nAttached is an updated patch fixing that. I also added a bit of code\nto copy CTID from the new tuple moved to another partition, not just\ntable OID, and did some code/comment adjustments, mainly to match\nother places. I also created a patch for PG11, which I am attaching\nas well.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Sun, 20 Sep 2020 23:40:52 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
{
"msg_contents": "Fujita-san,\n\nOn Sun, Sep 20, 2020 at 11:41 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Mon, Sep 14, 2020 at 10:45 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Mon, Sep 14, 2020 at 5:56 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > > IIUC, I think two issues are discussed in the thread [1]: (a) there is\n> > > currently no way to define the set of meaningful system columns for a\n> > > partitioned table that contains pluggable storages other than standard\n> > > heap, and (b) even in the case where the set is defined as before,\n> > > like partitioned tables that don’t contain any pluggable storages,\n> > > system column values are not obtained from a tuple inserted into a\n> > > partitioned table in cases as reported in [1], because virtual slots\n> > > are assigned for partitioned tables [2][3]. (I think the latter is\n> > > the original issue in the thread, though.)\n> >\n> > Right, (b) can be solved by using a leaf partition's tuple slot as\n> > proposed.\n>\n> You mean what is proposed in [3]?\n\nYes. Although, I am for assigning a dedicated slot to partitions\n*unconditionally*, whereas the PoC patch Andres shared makes it\nconditional on either needing tuple conversion between the root and\nthe partition or having a RETURNING projection present in the query.\n\n> > > I think we could fix this update-tuple-routing-vs-RETURNING issue\n> > > without the fix for (a). But to transfer system column values in a\n> > > cleaner way, I think we need to fix (b) first so that we can always\n> > > obtain/copy them from the new tuple moved to another partition by\n> > > INSERT following DELETE.\n> >\n> > Yes, I think you are right. Although, I am still not sure how to\n> > \"copy\" system columns from one partition's slot to another's, that is,\n> > without assuming what they are.\n>\n> I just thought we assume that partitions support all the existing\n> system attributes until we have the fix for (a), i.e., the slot\n> assigned for a partition must have the getsysattr callback routine\n> from which we can fetch each existing system attribute of a underlying\n> tuple in the slot, regardless of whether that system attribute is used\n> for the partition or not.\n\nHmm, to copy one slot's system attributes into another, we will also\nneed a way to \"set\" the system attributes in the destination slot.\nBut maybe I didn't fully understand what you said.\n\n> > > (I think we could address this issue for v11 independently of [1], off course.)\n> >\n> > Yeah.\n>\n> I noticed that I modified your patch incorrectly. Sorry for that.\n> Attached is an updated patch fixing that.\n\nAh, no problem. Thanks for updating.\n\n> I also added a bit of code\n> to copy CTID from the new tuple moved to another partition, not just\n> table OID, and did some code/comment adjustments, mainly to match\n> other places. I also created a patch for PG11, which I am attaching\n> as well.\n\nIn the patch for PG 11:\n\n+ new_tuple->t_self = new_tuple->t_data->t_ctid =\n+ old_tuple->t_self;\n...\n\nShould we add a one-line comment above this block of code to transfer\nsystem attributes? Maybe: /* Also transfer the system attributes. */?\n\nBTW, do you think we should alter the test in PG 11 branch to test\nthat system attributes are returned correctly? Once we settle the\nother issue, we can adjust the HEAD's test to do likewise?\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 23 Sep 2020 22:12:46 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
{
"msg_contents": "On Wed, Sep 23, 2020 at 10:12 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Sun, Sep 20, 2020 at 11:41 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > On Mon, Sep 14, 2020 at 10:45 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > On Mon, Sep 14, 2020 at 5:56 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > > > IIUC, I think two issues are discussed in the thread [1]: (a) there is\n> > > > currently no way to define the set of meaningful system columns for a\n> > > > partitioned table that contains pluggable storages other than standard\n> > > > heap, and (b) even in the case where the set is defined as before,\n> > > > like partitioned tables that don’t contain any pluggable storages,\n> > > > system column values are not obtained from a tuple inserted into a\n> > > > partitioned table in cases as reported in [1], because virtual slots\n> > > > are assigned for partitioned tables [2][3]. (I think the latter is\n> > > > the original issue in the thread, though.)\n\n> > > > I think we could fix this update-tuple-routing-vs-RETURNING issue\n> > > > without the fix for (a). But to transfer system column values in a\n> > > > cleaner way, I think we need to fix (b) first so that we can always\n> > > > obtain/copy them from the new tuple moved to another partition by\n> > > > INSERT following DELETE.\n> > >\n> > > Yes, I think you are right. Although, I am still not sure how to\n> > > \"copy\" system columns from one partition's slot to another's, that is,\n> > > without assuming what they are.\n> >\n> > I just thought we assume that partitions support all the existing\n> > system attributes until we have the fix for (a), i.e., the slot\n> > assigned for a partition must have the getsysattr callback routine\n> > from which we can fetch each existing system attribute of a underlying\n> > tuple in the slot, regardless of whether that system attribute is used\n> > for the partition or not.\n>\n> Hmm, to copy one slot's system attributes into another, we will also\n> need a way to \"set\" the system attributes in the destination slot.\n> But maybe I didn't fully understand what you said.\n\nSorry, my thought was vague. To store xmin/xmax/cmin/cmax into a\ngiven slot, we need to extend the TupleTableSlot struct to contain\nthese attributes as well? Or we need to introduce a new callback\nroutine for that (say, setsysattr)? These would not be\nback-patchable, though.\n\n> > I also created a patch for PG11, which I am attaching\n> > as well.\n>\n> In the patch for PG 11:\n>\n> + new_tuple->t_self = new_tuple->t_data->t_ctid =\n> + old_tuple->t_self;\n> ...\n>\n> Should we add a one-line comment above this block of code to transfer\n> system attributes? Maybe: /* Also transfer the system attributes. */?\n\nWill add that comment. Thanks for reviewing!\n\n> BTW, do you think we should alter the test in PG 11 branch to test\n> that system attributes are returned correctly?\n\nYeah, I think so. I didn’t come up with any good test cases for that, though.\n\n> Once we settle the\n> other issue, we can adjust the HEAD's test to do likewise?\n\nYeah, but for the other issue, I started thinking that we should just\nforbid referencing xmin/xmax/cmin/cmax in 12, 13, and HEAD...\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Thu, 24 Sep 2020 04:25:24 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
{
"msg_contents": "On Thu, Sep 24, 2020 at 04:25:24AM +0900, Etsuro Fujita wrote:\n> Sorry, my thought was vague. To store xmin/xmax/cmin/cmax into a\n> given slot, we need to extend the TupleTableSlot struct to contain\n> these attributes as well? Or we need to introduce a new callback\n> routine for that (say, setsysattr)? These would not be\n> back-patchable, though.\n\nPlease note that the latest patch fails to apply, so this needs a\nrebase.\n--\nMichael",
"msg_date": "Thu, 24 Sep 2020 13:54:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
{
"msg_contents": "On Thu, Sep 24, 2020 at 1:54 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Sep 24, 2020 at 04:25:24AM +0900, Etsuro Fujita wrote:\n> > Sorry, my thought was vague. To store xmin/xmax/cmin/cmax into a\n> > given slot, we need to extend the TupleTableSlot struct to contain\n> > these attributes as well? Or we need to introduce a new callback\n> > routine for that (say, setsysattr)? These would not be\n> > back-patchable, though.\n>\n> Please note that the latest patch fails to apply, so this needs a\n> rebase.\n\nI saw the CF-bot failure too yesterday, although it seems that it's\nbecause the bot is trying to apply the patch version meant for v11\nbranch onto HEAD branch. I just checked that the patches apply\ncleanly to their respective branches.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 24 Sep 2020 14:26:04 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
{
"msg_contents": "On Thu, Sep 24, 2020 at 4:25 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Wed, Sep 23, 2020 at 10:12 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Sun, Sep 20, 2020 at 11:41 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > > On Mon, Sep 14, 2020 at 10:45 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > > Although, I am still not sure how to\n> > > > \"copy\" system columns from one partition's slot to another's, that is,\n> > > > without assuming what they are.\n> > >\n> > > I just thought we assume that partitions support all the existing\n> > > system attributes until we have the fix for (a), i.e., the slot\n> > > assigned for a partition must have the getsysattr callback routine\n> > > from which we can fetch each existing system attribute of a underlying\n> > > tuple in the slot, regardless of whether that system attribute is used\n> > > for the partition or not.\n> >\n> > Hmm, to copy one slot's system attributes into another, we will also\n> > need a way to \"set\" the system attributes in the destination slot.\n> > But maybe I didn't fully understand what you said.\n>\n> Sorry, my thought was vague. To store xmin/xmax/cmin/cmax into a\n> given slot, we need to extend the TupleTableSlot struct to contain\n> these attributes as well? Or we need to introduce a new callback\n> routine for that (say, setsysattr)? These would not be\n> back-patchable, though.\n\nYeah, I'd think so too.\n\nBTW, the discussion so far on the other thread is oblivious to the\nissue being discussed here, where we need to find a way to transfer\nsystem attributes between a pair of partitions that are possibly\nincompatible with each other in terms of what set of system attributes\nthey support. Although, if we prevent accessing system attributes\nwhen performing the operation on partitioned tables, like what you\nseem to propose below, then we wouldn't really have that problem.\n\n> > BTW, do you think we should alter the test in PG 11 branch to test\n> > that system attributes are returned correctly?\n>\n> Yeah, I think so. I didn’t come up with any good test cases for that, though.\n>\n> > Once we settle the\n> > other issue, we can adjust the HEAD's test to do likewise?\n>\n> Yeah, but for the other issue, I started thinking that we should just\n> forbid referencing xmin/xmax/cmin/cmax in 12, 13, and HEAD...\n\nWhen the command is being performed on a partitioned table you mean?\nThat is, it'd be okay to reference them when the command is being\nperformed directly on a leaf partition, although it's another thing\nwhether the leaf partitions themselves have sensible values to provide\nfor them.\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 24 Sep 2020 14:47:20 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
{
"msg_contents": "On Thu, Sep 24, 2020 at 2:47 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Thu, Sep 24, 2020 at 4:25 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> BTW, the discussion so far on the other thread is oblivious to the\n> issue being discussed here, where we need to find a way to transfer\n> system attributes between a pair of partitions that are possibly\n> incompatible with each other in terms of what set of system attributes\n> they support.\n\nYeah, we should discuss the two issues together.\n\n> Although, if we prevent accessing system attributes\n> when performing the operation on partitioned tables, like what you\n> seem to propose below, then we wouldn't really have that problem.\n\nYeah, I think so.\n\n> > Yeah, but for the other issue, I started thinking that we should just\n> > forbid referencing xmin/xmax/cmin/cmax in 12, 13, and HEAD...\n>\n> When the command is being performed on a partitioned table you mean?\n\nYes. One concern about that is triggers: IIUC, triggers on a\npartition as-is can or can not reference xmin/xmax/cmin/cmax depending\non whether a dedicated tuple slot for the partition is used or not.\nWe should do something about this if we go in that direction?\n\n> That is, it'd be okay to reference them when the command is being\n> performed directly on a leaf partition, although it's another thing\n> whether the leaf partitions themselves have sensible values to provide\n> for them.\n\nI think so too.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Thu, 24 Sep 2020 19:30:42 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
{
"msg_contents": "On Thu, Sep 24, 2020 at 7:30 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Thu, Sep 24, 2020 at 2:47 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Thu, Sep 24, 2020 at 4:25 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > > Yeah, but for the other issue, I started thinking that we should just\n> > > forbid referencing xmin/xmax/cmin/cmax in 12, 13, and HEAD...\n> >\n> > When the command is being performed on a partitioned table you mean?\n>\n> Yes. One concern about that is triggers: IIUC, triggers on a\n> partition as-is can or can not reference xmin/xmax/cmin/cmax depending\n> on whether a dedicated tuple slot for the partition is used or not.\n> We should do something about this if we go in that direction?\n\nMaybe I'm missing something, but assuming that we're talking about\nprohibiting system attribute access in the RETURNING clause, how does\nthat affect what triggers can or cannot do? AFAICS, only AFTER\nrow-level triggers may sensibly access system attributes and\nwhether/how they can do so has not much to do with the slot that\nExecInsert() gets the new tuple in. 
It seems that the AFTER trigger\ninfrastructure remembers an affected tuple's ctid and fetches it just\nbefore calling trigger function by asking the result relation's (e.g.,\na partition's) access method.\n\nTo illustrate, with HEAD:\n\ncreate table foo (a int, b int) partition by range (a);\ncreate table foo1 partition of foo for values from (1) to (2);\n\ncreate or replace function report_system_info () returns trigger\nlanguage plpgsql as $$\nbegin\n raise notice 'ctid: %', new.ctid;\n raise notice 'xmin: %', new.xmin;\n raise notice 'xmax: %', new.xmax;\n raise notice 'cmin: %', new.cmin;\n raise notice 'cmax: %', new.cmax;\n raise notice 'tableoid: %', new.tableoid;\n return NULL;\nend; $$;\n\ncreate trigger foo_after_trig after insert on foo for each row execute\nfunction report_system_info();\n\nbegin;\n\ninsert into foo values (1);\nNOTICE: ctid: (0,1)\nNOTICE: xmin: 532\nNOTICE: xmax: 0\nNOTICE: cmin: 0\nNOTICE: cmax: 0\nNOTICE: tableoid: 16387\n\ninsert into foo values (1);\nNOTICE: ctid: (0,2)\nNOTICE: xmin: 532\nNOTICE: xmax: 0\nNOTICE: cmin: 1\nNOTICE: cmax: 1\nNOTICE: tableoid: 16387\n\ninsert into foo values (1);\nNOTICE: ctid: (0,3)\nNOTICE: xmin: 532\nNOTICE: xmax: 0\nNOTICE: cmin: 2\nNOTICE: cmax: 2\nNOTICE: tableoid: 16387\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 24 Sep 2020 22:52:53 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
{
"msg_contents": "On Thu, Sep 24, 2020 at 2:26 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Thu, Sep 24, 2020 at 1:54 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > On Thu, Sep 24, 2020 at 04:25:24AM +0900, Etsuro Fujita wrote:\n> > > Sorry, my thought was vague. To store xmin/xmax/cmin/cmax into a\n> > > given slot, we need to extend the TupleTableSlot struct to contain\n> > > these attributes as well? Or we need to introduce a new callback\n> > > routine for that (say, setsysattr)? These would not be\n> > > back-patchable, though.\n> >\n> > Please note that the latest patch fails to apply, so this needs a\n> > rebase.\n>\n> I saw the CF-bot failure too yesterday, although it seems that it's\n> because the bot is trying to apply the patch version meant for v11\n> branch onto HEAD branch. I just checked that the patches apply\n> cleanly to their respective branches.\n\nI checked that the last statement is still true, so I've switched the\nstatus back to Needs Review.\n\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 30 Sep 2020 12:34:17 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
{
"msg_contents": "On Wed, Sep 30, 2020 at 12:34 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Thu, Sep 24, 2020 at 2:26 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > I saw the CF-bot failure too yesterday, although it seems that it's\n> > because the bot is trying to apply the patch version meant for v11\n> > branch onto HEAD branch. I just checked that the patches apply\n> > cleanly to their respective branches.\n>\n> I checked that the last statement is still true, so I've switched the\n> status back to Needs Review.\n\nAnd the patch for HEAD no longer applies, because of a recent\nrefactoring commit that hit update row movement code.\n\nRebased patch for HEAD attached. Patch for PG11 should apply as-is.\n\nHere is a summary of where we stand on this, because another issue\nrelated to using RETURNING with partitioned tables [1] kind of ties\ninto this.\n\nThe problem reported on this thread is solved by ExecUpdate() itself\nevaluating the RETURNING list using the source partition's\nri_projectReturning, instead of ExecInsert() doing it using the\ndestination partition's ri_projectReturning. It must work that way,\nbecause the tuple descriptor of 'planSlot', which provides the values\nof the columns mentioned in RETURNING of tables other than the target\ntable, is based on the source partition. Note that the tuple inserted\ninto the destination partition needs to be converted into the source\npartition if their tuple descriptors differ, so that RETURNING\ncorrectly returns the new values of the updated tuple.\n\nThe point we are stuck on is whether this fix will create problems for\nthe case when the RETURNING list contains system columns. 
Now that we\nwill be projecting the tuple inserted into the destination using its\ncopy in the source partition's format and we have no way of preserving\nthe system information in the tuple across conversions, we need to\nfind a way to make RETURNING project the correct system information.\nFujita-san has suggested (or agreed with a suggestion made at [1])\nthat we should simply prevent system information from being projected\nwith RETURNING when the command is being performed on a partitioned\ntable, in which case this becomes a non-issue. If that is not what we\ndecide to do on the other thread, then at least we will have to figure\nout over there how we will support RETURNING system columns when row\nmovement occurs during an update on partitioned tables. The question\nI guess is whether that thread must conclude before the fix here can\nbe committed.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/flat/141051591267657%40mail.yandex.ru",
"msg_date": "Fri, 30 Oct 2020 19:00:42 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
{
"msg_contents": "On 10/30/20 6:00 AM, Amit Langote wrote:\n> The question\n> I guess is whether that thread must conclude before the fix here can\n> be committed.\n\nIndeed. But it seems like there is a candidate patch on [1] though that \nthread has also stalled. I'll try to get some status on that thread but \nthe question remains if this patch will be stalled until [1] is resolved.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n[1] \nhttps://www.postgresql.org/message-id/flat/141051591267657%40mail.yandex.ru\n\n\n",
"msg_date": "Fri, 5 Mar 2021 10:52:31 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
{
"msg_contents": "On Sat, Mar 6, 2021 at 12:52 AM David Steele <david@pgmasters.net> wrote:\n> On 10/30/20 6:00 AM, Amit Langote wrote:\n> > The question\n> > I guess is whether that thread must conclude before the fix here can\n> > be committed.\n>\n> Indeed. But it seems like there is a candidate patch on [1] though that\n> thread has also stalled. I'll try to get some status on that thread but\n> the question remains if this patch will be stalled until [1] is resolved.\n\nSorry for the delay in following up on this thread. I've posted new\npatches on the other.\n\nFWIW, I think we should go ahead and apply the patches for the bug\nreported here. Anyone who tries to project an updated tuple's system\ncolumns using RETURNING are likely to face problems one way or\nanother, especially if they have partitioned tables containing\npartitions of varying table AMs, but at least they won't face the bug\ndiscussed here.\n\nI've attached patches for all affected branches.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 22 Mar 2021 17:33:33 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> FWIW, I think we should go ahead and apply the patches for the bug\n> reported here. Anyone who tries to project an updated tuple's system\n> columns using RETURNING are likely to face problems one way or\n> another, especially if they have partitioned tables containing\n> partitions of varying table AMs, but at least they won't face the bug\n> discussed here.\n\nAgreed, we should get this fixed in time for the next minor releases.\nThe issue no longer exists on HEAD, thanks to 86dc90056 having got\nrid of per-target-relation variance in the contents of planSlot.\nBut we still need a fix for the back branches.\n\nSo I looked over the v13 patch, and found a couple of things\nI didn't like:\n\n* I think what you did in ExecProcessReturning is buggy. It's\nnot a great idea to have completely different processes for\ngetting tableoid set in normal-relation vs foreign-relation\ncases, and in this case the foreign-relation case was simply\nwrong. Maybe the bug isn't reachable for lack of support of\ncross-partition motion with FDWs, but I'm not sure about that.\nWe really need to decouple the RETURNING expressions (which\nwill belong to the source relation) from the value injected\nfor tableoid (which will belong to the destination).\n\n* I really disliked the API change that ExecInsert is responsible\nfor computing RETURNING except when it isn't. That's confusing\nand there's no good reason for it, since it's not really any\neasier to deal with the case at the call site than inside ExecInsert.\n\nIn the attached revision I made ExecInsert handle RETURNING\ncalculations by asking the callers to pass in the ResultRelInfo\nthat should be used for the purpose. 
We could alternatively\nhave taken the responsibility for RETURNING out of ExecInsert\naltogether, making the callers call ExecProcessReturning.\nI think that might have netted out slightly simpler than this.\nBut we're unlikely to apply such a change in HEAD, so it seemed\nbetter to keep the division of responsibilities the same as it\nis in other branches.\n\nThoughts? (I've not looked at porting this to v12 or v11 yet.)\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 21 Apr 2021 20:37:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
{
"msg_contents": "On Thu, Apr 22, 2021 at 9:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > FWIW, I think we should go ahead and apply the patches for the bug\n> > reported here. Anyone who tries to project an updated tuple's system\n> > columns using RETURNING are likely to face problems one way or\n> > another, especially if they have partitioned tables containing\n> > partitions of varying table AMs, but at least they won't face the bug\n> > discussed here.\n>\n> Agreed, we should get this fixed in time for the next minor releases.\n\nThanks for taking a look at this.\n\n> The issue no longer exists on HEAD, thanks to 86dc90056 having got\n> rid of per-target-relation variance in the contents of planSlot.\n> But we still need a fix for the back branches.\n>\n> So I looked over the v13 patch, and found a couple of things\n> I didn't like:\n>\n> * I think what you did in ExecProcessReturning is buggy. It's\n> not a great idea to have completely different processes for\n> getting tableoid set in normal-relation vs foreign-relation\n> cases, and in this case the foreign-relation case was simply\n> wrong. Maybe the bug isn't reachable for lack of support of\n> cross-partition motion with FDWs, but I'm not sure about that.\n\nThe two cases I see are some callers passing a valid 'tupleSlot' to\nExecProcessReturning() and some (just one!) not. The latter case\napplies only to a foreign relations that are directly-modified and\nrelies on the FDWs having set econtext->ecxt_scantuple to the correct\nslot. In the 1st case, the callers should already have made sure that\nthe correct tableoid is passed through 'tableSlot', although\nFujita-san had noticed in [1] that my earlier patch failed to do that\nin the cross-partition update case. 
Along with fixing the problem of\nmy patch, he also proposed that we set the tableoid of the slot\nassigned to ecxt_scantuple only in the 2nd case to save cycles.\n\nI'm fine with leaving it the way it is as your updated patch does, but\nI don't really see a bug being introduced. Actually, the code was\nlike what Fujita-san proposed we do before b8d71745eac changed it to\nthe current way:\n\n@@ -166,20 +166,15 @@ ExecProcessReturning(ResultRelInfo *resultRelInfo,\n /* Make tuple and any needed join variables available to ExecProject */\n if (tupleSlot)\n econtext->ecxt_scantuple = tupleSlot;\n- else\n- {\n- HeapTuple tuple;\n-\n- /*\n- * RETURNING expressions might reference the tableoid column, so\n- * initialize t_tableOid before evaluating them.\n- */\n- Assert(!TupIsNull(econtext->ecxt_scantuple));\n- tuple = ExecFetchSlotHeapTuple(econtext->ecxt_scantuple, true, NULL);\n- tuple->t_tableOid = RelationGetRelid(resultRelInfo->ri_RelationDesc);\n- }\n econtext->ecxt_outertuple = planSlot;\n\n+ /*\n+ * RETURNING expressions might reference the tableoid column, so\n+ * reinitialize tts_tableOid before evaluating them.\n+ */\n+ econtext->ecxt_scantuple->tts_tableOid =\n+ RelationGetRelid(resultRelInfo->ri_RelationDesc);\n+\n /* Compute the RETURNING expressions */\n return ExecProject(projectReturning);\n }\n\n> We really need to decouple the RETURNING expressions (which\n> will belong to the source relation) from the value injected\n> for tableoid (which will belong to the destination).\n\nYeah, that makes sense.\n\n> * I really disliked the API change that ExecInsert is responsible\n> for computing RETURNING except when it isn't. 
That's confusing\n> and there's no good reason for it, since it's not really any\n> easier to deal with the case at the call site than inside ExecInsert.\n>\n> In the attached revision I made ExecInsert handle RETURNING\n> calculations by asking the callers to pass in the ResultRelInfo\n> that should be used for the purpose.\n\nThat seems fine to me.\n\n> We could alternatively\n> have taken the responsibility for RETURNING out of ExecInsert\n> altogether, making the callers call ExecProcessReturning.\n> I think that might have netted out slightly simpler than this.\n> But we're unlikely to apply such a change in HEAD, so it seemed\n> better to keep the division of responsibilities the same as it\n> is in other branches.\n\nI agree.\n\n> Thoughts?\n\nThe patch looks good to me, except one thing:\n\n /*\n+ * In a cross-partition UPDATE with RETURNING, we have to use the\n+ * source partition's RETURNING list, because that matches the output\n+ * of the planSlot, while the destination partition might have\n+ * different resjunk columns. This means we have to map the\n+ * destination slot back to the source's format so we can apply that\n+ * RETURNING list. This is expensive, but it should be an uncommon\n+ * corner case, so we won't spend much effort on making it fast.\n+ */\n+ if (returningRelInfo != resultRelInfo)\n+ {\n\nI think we should also add slot != srcSlot to this condition. The\nuncommon corner case is the source and/or the destination partitions\nhaving different column order than the root parent, thus requiring the\ntuple to be converted during tuple routing using slots appropriate to\neach relation, which causes 'slot' to end up different than 'srcSlot'.\nBut in the common case, where tuple descriptors match between all\ntables involved, 'slot' should be the same as 'srcSlot'.\n\n> (I've not looked at porting this to v12 or v11 yet.)\n\nI did that; patches attached. 
(I haven't changed them to incorporate\nthe above comment though.)\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/CAPmGK14QXD5Te_vwGgpuVWXRcrC%2Bd8FyWse0aHSqnDDSeeCRFQ%40mail.gmail.com",
"msg_date": "Thu, 22 Apr 2021 13:54:51 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> I think we should also add slot != srcSlot to this condition.\n\nGood idea, should save useless comparisons of identical tupdescs.\nDone.\n\n>> (I've not looked at porting this to v12 or v11 yet.)\n\n> I did that; patches attached. (I haven't changed them to incorporate\n> the above comment though.)\n\nThanks for that, saved me some work. I've pushed these.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Apr 2021 11:49:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
{
"msg_contents": "On Fri, Apr 23, 2021 at 12:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > I think we should also add slot != srcSlot to this condition.\n>\n> Good idea, should save useless comparisons of identical tupdescs.\n> Done.\n>\n> >> (I've not looked at porting this to v12 or v11 yet.)\n>\n> > I did that; patches attached. (I haven't changed them to incorporate\n> > the above comment though.)\n>\n> Thanks for that, saved me some work. I've pushed these.\n\nThanks for working on closing this and that other issue.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 23 Apr 2021 10:08:16 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: problem with RETURNING and update row movement"
},
{
"msg_contents": "On Fri, Apr 23, 2021 at 10:08 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Fri, Apr 23, 2021 at 12:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Amit Langote <amitlangote09@gmail.com> writes:\n> > >> (I've not looked at porting this to v12 or v11 yet.)\n> >\n> > > I did that; patches attached. (I haven't changed them to incorporate\n> > > the above comment though.)\n> >\n> > Thanks for that, saved me some work. I've pushed these.\n>\n> Thanks for working on closing this and that other issue.\n\nThanks to both of you for working on these!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Fri, 23 Apr 2021 10:49:30 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: problem with RETURNING and update row movement"
}
] |
[
{
"msg_contents": "Commit 72b646033 inserted this into convertJsonbScalar:\n\n \t\t\tbreak;\n \n \t\tcase jbvNumeric:\n+\t\t\t/* replace numeric NaN with string \"NaN\" */\n+\t\t\tif (numeric_is_nan(scalarVal->val.numeric))\n+\t\t\t{\n+\t\t\t\tappendToBuffer(buffer, \"NaN\", 3);\n+\t\t\t\t*jentry = 3;\n+\t\t\t\tbreak;\n+\t\t\t}\n+\n \t\t\tnumlen = VARSIZE_ANY(scalarVal->val.numeric);\n \t\t\tpadlen = padBufferToInt(buffer);\n\nTo characterize this as hack, slash, and burn programming would be\ncharitable. It is entirely clear from the code, the documentation,\nand the relevant RFCs that JSONB does not allow NaNs as numeric\nvalues. So it should be impossible for this code to do anything.\n\nI tried taking it out, and found that this test case from the same\ncommit fails:\n\n+select jsonb_path_query('\"nan\"', '$.double()');\n+ jsonb_path_query \n+------------------\n+ \"NaN\"\n+(1 row)\n\nHowever, seeing that the JSON RFC disallows NaNs, I do not understand\nwhy it's important to accept this. The adjacent test case showing\nthat 'inf' isn't accepted:\n\n+select jsonb_path_query('\"inf\"', '$.double()');\n+ERROR: non-numeric SQL/JSON item\n+DETAIL: jsonpath item method .double() can only be applied to a numeric value\n\nseems like a saner approach.\n\nIn short, I think we should rip out the above code snippet and adjust\nexecuteItemOptUnwrapTarget, at about line jsonpath_exec.c:1076 as of HEAD,\nto reject NaNs the same way it already rejects infinities. Can you\nexplain why it was done like this?\n\n(The reason I came across this was that I'm working on extending\ntype numeric to allow infinities, and it was not clear what to\ndo here. But allowing a jsonb to contain a numeric NaN, even\ntransiently, seems like a completely horrid idea.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 11 Jun 2020 08:45:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "jsonpath versus NaN"
},
{
"msg_contents": "Hi Tom,\n\nThank you for raising this issue.\n\nOn Thu, Jun 11, 2020 at 3:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Commit 72b646033 inserted this into convertJsonbScalar:\n>\n> break;\n>\n> case jbvNumeric:\n> + /* replace numeric NaN with string \"NaN\" */\n> + if (numeric_is_nan(scalarVal->val.numeric))\n> + {\n> + appendToBuffer(buffer, \"NaN\", 3);\n> + *jentry = 3;\n> + break;\n> + }\n> +\n> numlen = VARSIZE_ANY(scalarVal->val.numeric);\n> padlen = padBufferToInt(buffer);\n>\n> To characterize this as hack, slash, and burn programming would be\n> charitable. It is entirely clear from the code, the documentation,\n> and the relevant RFCs that JSONB does not allow NaNs as numeric\n> values.\n\nThe JSONB itself doesn't store number NaNs. It stores the string \"NaN\".\n\nI found the relevant part of the standard. Unfortunately, I can't\npost the full standard here due to its license, but I think I can cite\nthe relevant part.\n\n1) If JM specifies double, then For all j, 1 (one) ≤ j ≤ n,\nCase:\na) If Ij is not a number or character string, then let ST be data\nexception — non-numeric SQL/JSON item.\nb) Otherwise, let X be an SQL variable whose value is Ij. Let Vj be\nthe result of\nCAST (X AS DOUBLE PRECISION)\nIf this conversion results in an exception condition, then let ST be\nthat exception condition.\n\nSo, when we apply the .double() method to string, then the result\nshould be the same as if we cast string to double in SQL. In SQL we\nconvert string 'NaN' to numeric NaN. So, standard requires us to do\nthe same in SQL/JSON.\n\nI didn't find yet what the standard says about serializing NaNs back\nto JSON. I'll keep you posted.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Thu, 11 Jun 2020 21:41:00 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: jsonpath versus NaN"
},
{
"msg_contents": "Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> On Thu, Jun 11, 2020 at 3:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> It is entirely clear from the code, the documentation,\n>> and the relevant RFCs that JSONB does not allow NaNs as numeric\n>> values.\n\n> The JSONB itself doesn't store number NaNs. It stores the string \"NaN\".\n\nYeah, but you have a numeric NaN within the JsonbValue tree between\nexecuteItemOptUnwrapTarget and convertJsonbScalar. Who's to say that\nthat illegal-per-the-data-type structure won't escape to somewhere else?\nOr perhaps more likely, that we'll need additional warts in other random\nplaces in the JSON code to keep from spitting up on the transiently\ninvalid structure.\n\n> I found the relevant part of the standard. Unfortunately, I can't\n> post the full standard here due to its license, but I think I can cite\n> the relevant part.\n\nI don't think this is very relevant. The SQL standard has not got the\nconcepts of Inf or NaN either (see 4.4.2 Characteristics of numbers),\ntherefore their definition is only envisioning that a string representing\na normal finite number should be castable to DOUBLE PRECISION. Thus,\nboth of the relevant standards think that \"numbers\" are just finite\nnumbers.\n\nSo when neither JSON nor SQL consider that \"NaN\" is an allowed sort\nof number, why are you doing violence to the code to allow it in a\njsonpath? And if you insist on doing such violence, why didn't you\ndo some more and kluge it to the point where \"Inf\" would work too?\n(It would require slightly less klugery in the wake of the infinities-\nin-numeric patch that I'm going to post soon ... but that doesn't make\nit a good idea.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 11 Jun 2020 15:00:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: jsonpath versus NaN"
},
{
"msg_contents": "On Thu, Jun 11, 2020 at 10:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> > On Thu, Jun 11, 2020 at 3:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> It is entirely clear from the code, the documentation,\n> >> and the relevant RFCs that JSONB does not allow NaNs as numeric\n> >> values.\n>\n> > The JSONB itself doesn't store number NaNs. It stores the string \"NaN\".\n>\n> Yeah, but you have a numeric NaN within the JsonbValue tree between\n> executeItemOptUnwrapTarget and convertJsonbScalar. Who's to say that\n> that illegal-per-the-data-type structure won't escape to somewhere else?\n> Or perhaps more likely, that we'll need additional warts in other random\n> places in the JSON code to keep from spitting up on the transiently\n> invalid structure.\n\n\nI would propose to split two things: user-visible behavior and\ninternal implementation. Internal implementation, which allows\nnumeric NaN within the JsonbValue, isn't perfect and we could improve\nit. But I'd like to determine desired user-visible behavior first,\nthen we can decide how to fix the implementation.\n\n>\n> > I found the relevant part of the standard. Unfortunately, I can't\n> > post the full standard here due to its license, but I think I can cite\n> > the relevant part.\n>\n> I don't think this is very relevant. The SQL standard has not got the\n> concepts of Inf or NaN either (see 4.4.2 Characteristics of numbers),\n> therefore their definition is only envisioning that a string representing\n> a normal finite number should be castable to DOUBLE PRECISION. Thus,\n> both of the relevant standards think that \"numbers\" are just finite\n> numbers.\n>\n> So when neither JSON nor SQL consider that \"NaN\" is an allowed sort\n> of number, why are you doing violence to the code to allow it in a\n> jsonpath?\n\nYes, I see. No standard insists we should support NaN. 
However,\nthe standard claims .double() should behave the same as CAST to double.\nSo, I think if CAST supports NaN, but .double() doesn't, it's still a\nviolation.\n\n> And if you insist on doing such violence, why didn't you\n> do some more and kluge it to the point where \"Inf\" would work too?\n\n\nYep, according to the standard .double() should support \"Inf\" as long as\nCAST to double does. The reason why it wasn't implemented is that we\nuse numeric as the internal storage for all the numbers. And numeric\ndoesn't support Inf yet.\n\n> (It would require slightly less klugery in the wake of the infinities-\n> in-numeric patch that I'm going to post soon ... but that doesn't make\n> it a good idea.)\n\n\nIf numeric supported infinities, we could follow the standard and make\nthe .double() method work the same way as CAST to double does. Now, I grant\nthat there is not much reason to keep the current behaviour, which supports\nNaN but doesn't support Inf. I think we should either support both\nNaN and Inf or support neither of them. The latter is a violation\nof the standard, but provides us with a simpler and cleaner\nimplementation. What do you think?\n\nBTW, we found what the standard says about serialization of SQL/JSON items.\n\n9.37 Serializing an SQL/JSON item (page 695)\nii) Let JV be an implementation-dependent value of type TT and\nencoding ENC such that these two conditions hold:\n1) JV is a JSON text.\n2) When applying the General Rules of Subclause 9.36, “Parsing JSON\ntext” with JV as JSON TEXT, FO as FORMAT OPTION, and WITHOUT UNIQUE\nKEYS as UNIQUENESS CONSTRAINT, the returned STATUS is successful\ncompletion and the returned SQL/JSON ITEM is an SQL/JSON item that is\nequivalent to SJI.\nIf there is no such JV, then let ST be the exception condition: data\nexception — invalid JSON text.\n\nBasically it says that the resulting text should result in the same\nSQL/JSON item when parsed. 
I think this literally means that\nserialization of numeric NaN is impossible, since it's impossible\nto get numeric NaN as the result of JSON parsing. However, by the same\ntoken this would mean that serialization of datetime is also impossible,\nwhich seems like nonsense. So, I think this paragraph of the\nstandard is ill-conceived.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Mon, 15 Jun 2020 13:50:11 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: jsonpath versus NaN"
},
{
"msg_contents": "Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> On Thu, Jun 11, 2020 at 10:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I don't think this is very relevant. The SQL standard has not got the\n>> concepts of Inf or NaN either (see 4.4.2 Characteristics of numbers),\n>> therefore their definition is only envisioning that a string representing\n>> a normal finite number should be castable to DOUBLE PRECISION. Thus,\n>> both of the relevant standards think that \"numbers\" are just finite\n>> numbers.\n\n> Yes, I see. No standard insists we should support NaN. However,\n> standard claims .double() should behave the same as CAST to double.\n> So, I think if CAST supports NaN, but .double() doesn't, it's still a\n> violation.\n\nNo, I think you are completely misunderstanding the standard. They\nare saying that strings that look like legal numbers according to SQL\nshould be castable into numbers. But NaN and Inf are not legal\nnumbers according to SQL, so there is nothing in that text that\njustifies accepting \"NaN\". Nor does the JSON standard provide any\nsupport for that position. So I think it is fine to leave NaN/Inf\nout of the world of what you can write in jsonpath.\n\nI'd be more willing to let the code do this if it didn't require such\na horrid, dangerous kluge to do so. But it does, and I don't see any\neasy way around that, so I think we should just take out the kluge.\nAnd do so sooner not later, before some misguided user starts to\ndepend on it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Jun 2020 11:33:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: jsonpath versus NaN"
},
{
"msg_contents": "On Wed, Jun 17, 2020 at 6:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> > On Thu, Jun 11, 2020 at 10:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I don't think this is very relevant. The SQL standard has not got the\n> >> concepts of Inf or NaN either (see 4.4.2 Characteristics of numbers),\n> >> therefore their definition is only envisioning that a string\n> representing\n> >> a normal finite number should be castable to DOUBLE PRECISION. Thus,\n> >> both of the relevant standards think that \"numbers\" are just finite\n> >> numbers.\n>\n> > Yes, I see. No standard insists we should support NaN. However,\n> > standard claims .double() should behave the same as CAST to double.\n> > So, I think if CAST supports NaN, but .double() doesn't, it's still a\n> > violation.\n>\n> No, I think you are completely misunderstanding the standard. They\n> are saying that strings that look like legal numbers according to SQL\n> should be castable into numbers. But NaN and Inf are not legal\n> numbers according to SQL, so there is nothing in that text that\n> justifies accepting \"NaN\". Nor does the JSON standard provide any\n> support for that position. So I think it is fine to leave NaN/Inf\n> out of the world of what you can write in jsonpath.\n>\n\nrfc and sql json forbid Nan and Inf ( Technical Report is freely available,\nhttps://standards.iso.org/ittf/PubliclyAvailableStandards/c067367_ISO_IEC_TR_19075-6_2017.zip\n)\n\nPage 10 JSON terminology.\n“A sequence comprising an integer part, optionally followed by a fractional\npart and/or\nan exponent part (non-numeric values, such as infinity and NaN are not\npermitted)”\n\n\n>\n> I'd be more willing to let the code do this if it didn't require such\n> a horrid, dangerous kluge to do so. But it does, and I don't see any\n> easy way around that, so I think we should just take out the kluge.\n> And do so sooner not later, before some misguided user starts to\n> depend on it.\n>\n\nThe problem is that we tried to find a trade-off between standard and\npostgres\nimplementation, for example, in postgres CAST allows NaN and Inf, and SQL\nStandard\nrequires .double should works as CAST.\n\n SELECT 'nan'::real, 'inf'::real;\n float4 | float4\n--------+----------\n NaN | Infinity\n(1 row)\n\n\n\n>\n> regards, tom lane\n>\n>\n>\n\n-- \nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n",
"msg_date": "Thu, 18 Jun 2020 18:51:04 +0300",
"msg_from": "Oleg Bartunov <obartunov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: jsonpath versus NaN"
},
{
"msg_contents": "Oleg Bartunov <obartunov@postgrespro.ru> writes:\n> The problem is that we tried to find a trade-off between standard and\n> postgres implementation, for example, in postgres CAST allows NaN and\n> Inf, and SQL Standard requires .double should works as CAST.\n\nAs I said, I think this is a fundamental misreading of the standard.\nThe way I read it is that it requires the set of values that are legal\naccording to the standard to be processed the same way as CAST would.\n\nWhile we certainly *could* choose to extend jsonpath, and/or jsonb\nitself, to allow NaN/Inf, I do not think that it's sane to argue that\nthe standard requires us to do that; the wording in the opposite\ndirection is pretty clear. Also, I do not find it convincing to\nextend jsonpath that way when we haven't extended jsonb. Quite aside\nfrom the ensuing code warts, what in the world is the use-case?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Jun 2020 12:07:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: jsonpath versus NaN"
},
{
"msg_contents": "On Thu, Jun 18, 2020 at 11:51 AM Oleg Bartunov <obartunov@postgrespro.ru> wrote:\n> The problem is that we tried to find a trade-off between standard and postgres\n> implementation, for example, in postgres CAST allows NaN and Inf, and SQL Standard\n> requires .double should works as CAST.\n\nIt seems like the right thing is to implement the standard, not to\nimplement whatever PostgreSQL happens to do in other cases. I can't\nhelp feeling like re-using the numeric data type for other things has\nled to this confusion. I think that fails in other cases, too: like\nwhat if you have a super-long integer that can't be represented as a\nnumeric? I bet jsonb will fail, or maybe it will convert it to a\nstring, but I don't see how it can do anything else.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Jun 2020 12:24:31 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: jsonpath versus NaN"
},
{
"msg_contents": "Tom,\n\nOn Thu, Jun 18, 2020 at 7:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Oleg Bartunov <obartunov@postgrespro.ru> writes:\n> > The problem is that we tried to find a trade-off between standard and\n> > postgres implementation, for example, in postgres CAST allows NaN and\n> > Inf, and SQL Standard requires .double should works as CAST.\n>\n> As I said, I think this is a fundamental misreading of the standard.\n> The way I read it is that it requires the set of values that are legal\n> according to the standard to be processed the same way as CAST would.\n\nThank you for your answer. I'm trying to understand your point.\nStandard claims that .double() method should behave the same way as\nCAST to double. However, standard references the standard behavior of\nCAST here, not behavior of your implementation of CAST. So, if we\nextend the functionality of standard CAST in our implementation, that\ndoesn't automatically mean we should extend the .double() jsonpath\nmethod in the same way. Is it correct?\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Thu, 18 Jun 2020 19:34:32 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: jsonpath versus NaN"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Jun 18, 2020 at 11:51 AM Oleg Bartunov <obartunov@postgrespro.ru> wrote:\n>> The problem is that we tried to find a trade-off between standard and postgres\n>> implementation, for example, in postgres CAST allows NaN and Inf, and SQL Standard\n>> requires .double should works as CAST.\n\n> It seems like the right thing is to implement the standard, not to\n> implement whatever PostgreSQL happens to do in other cases. I can't\n> help feeling like re-using the numeric data type for other things has\n> led to this confusion. I think that fails in other cases, too: like\n> what if you have a super-long integer that can't be represented as a\n> numeric? I bet jsonb will fail, or maybe it will convert it to a\n> string, but I don't see how it can do anything else.\n\nActually, the JSON spec explicitly says that any number that doesn't fit\nin an IEEE double isn't portable [1]. So we're already very far above and\nbeyond the spec's requirements by using numeric. We don't need to improve\non that. But I concur with your point that just because PG does X in\nsome other cases doesn't mean that we must do X in json or jsonpath.\n\n\t\t\tregards, tom lane\n\n[1] https://tools.ietf.org/html/rfc7159#page-6\n\n This specification allows implementations to set limits on the range\n and precision of numbers accepted. Since software that implements\n IEEE 754-2008 binary64 (double precision) numbers [IEEE754] is\n generally available and widely used, good interoperability can be\n achieved by implementations that expect no more precision or range\n than these provide, in the sense that implementations will\n approximate JSON numbers within the expected precision. A JSON\n number such as 1E400 or 3.141592653589793238462643383279 may indicate\n potential interoperability problems, since it suggests that the\n software that created it expects receiving software to have greater\n capabilities for numeric magnitude and precision than is widely\n available.\n\n Note that when such software is used, numbers that are integers and\n are in the range [-(2**53)+1, (2**53)-1] are interoperable in the\n sense that implementations will agree exactly on their numeric\n values.\n\n\n",
"msg_date": "Thu, 18 Jun 2020 12:35:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: jsonpath versus NaN"
},
{
"msg_contents": "On Thu, Jun 18, 2020 at 7:34 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> Thank you for your answer. I'm trying to understand your point.\n> Standard claims that .double() method should behave the same way as\n> CAST to double. However, standard references the standard behavior of\n> CAST here, not behavior of your implementation of CAST.\n\nTypo here: please read \"our implementation of CAST\" here.\n\n> So, if we\n> extend the functionality of standard CAST in our implementation, that\n> doesn't automatically mean we should extend the .double() jsonpath\n> method in the same way. Is it correct?\n\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Thu, 18 Jun 2020 19:36:02 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: jsonpath versus NaN"
},
{
"msg_contents": "Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> Thank you for your answer. I'm trying to understand your point.\n> Standard claims that .double() method should behave the same way as\n> CAST to double. However, standard references the standard behavior of\n> CAST here, not behavior of your implementation of CAST. So, if we\n> extend the functionality of standard CAST in our implementation, that\n> doesn't automatically mean we should extend the .double() jsonpath\n> method in the same way. Is it correct?\n\nRight. We could, if we chose, extend jsonpath to allow Inf/NaN, but\nI don't believe there's an argument that the spec requires us to.\n\nAlso the larger point is that it doesn't make sense to extend jsonpath\nthat way when we haven't extended json(b) that way. This code wart\nwouldn't exist were it not for that inconsistency. Also, I find it hard\nto see why anyone would have a use for NaN in a jsonpath test when they\ncan't write NaN in the input json data, nor have it be correctly reflected\ninto output json data either.\n\nMaybe there's a case for extending json(b) that way; it wouldn't be so\ndifferent from the work I'm doing nearby to extend type numeric for\ninfinities. But we'd have to have a conversation about whether\ninteroperability with other JSON implementations is worth sacrificing\nto improve consistency with our float and numeric datatypes. In the\nmeantime, though, we aren't allowing Inf/NaN in json(b) so I don't think\njsonpath should accept them either.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Jun 2020 12:45:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: jsonpath versus NaN"
},
{
"msg_contents": "On Thu, Jun 18, 2020 at 7:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> > Thank you for your answer. I'm trying to understand your point.\n> > Standard claims that .double() method should behave the same way as\n> > CAST to double. However, standard references the standard behavior of\n> > CAST here, not behavior of your implementation of CAST. So, if we\n> > extend the functionality of standard CAST in our implementation, that\n> > doesn't automatically mean we should extend the .double() jsonpath\n> > method in the same way. Is it correct?\n>\n> Right. We could, if we chose, extend jsonpath to allow Inf/NaN, but\n> I don't believe there's an argument that the spec requires us to.\n>\n> Also the larger point is that it doesn't make sense to extend jsonpath\n> that way when we haven't extended json(b) that way. This code wart\n> wouldn't exist were it not for that inconsistency. Also, I find it hard\n> to see why anyone would have a use for NaN in a jsonpath test when they\n> can't write NaN in the input json data, nor have it be correctly reflected\n> into output json data either.\n\nOk, I got the point. I have nothing against removing support for NaN\nin jsonpath as long as it doesn't violate the standard. I'm going to\nwrite the patch for this.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Thu, 18 Jun 2020 20:04:10 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: jsonpath versus NaN"
},
{
"msg_contents": "\nOn 6/18/20 12:35 PM, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> On Thu, Jun 18, 2020 at 11:51 AM Oleg Bartunov <obartunov@postgrespro.ru> wrote:\n>>> The problem is that we tried to find a trade-off between standard and postgres\n>>> implementation, for example, in postgres CAST allows NaN and Inf, and SQL Standard\n>>> requires .double should works as CAST.\n>> It seems like the right thing is to implement the standard, not to\n>> implement whatever PostgreSQL happens to do in other cases. I can't\n>> help feeling like re-using the numeric data type for other things has\n>> led to this confusion. I think that fails in other cases, too: like\n>> what if you have a super-long integer that can't be represented as a\n>> numeric? I bet jsonb will fail, or maybe it will convert it to a\n>> string, but I don't see how it can do anything else.\n> Actually, the JSON spec explicitly says that any number that doesn't fit\n> in an IEEE double isn't portable [1]. So we're already very far above and\n> beyond the spec's requirements by using numeric. We don't need to improve\n> on that. But I concur with your point that just because PG does X in\n> some other cases doesn't mean that we must do X in json or jsonpath.\n>\n> \t\t\tregards, tom lane\n>\n> [1] https://tools.ietf.org/html/rfc7159#page-6\n>\n> This specification allows implementations to set limits on the range\n> and precision of numbers accepted. Since software that implements\n> IEEE 754-2008 binary64 (double precision) numbers [IEEE754] is\n> generally available and widely used, good interoperability can be\n> achieved by implementations that expect no more precision or range\n> than these provide, in the sense that implementations will\n> approximate JSON numbers within the expected precision. A JSON\n> number such as 1E400 or 3.141592653589793238462643383279 may indicate\n> potential interoperability problems, since it suggests that the\n> software that created it expects receiving software to have greater\n> capabilities for numeric magnitude and precision than is widely\n> available.\n>\n> Note that when such software is used, numbers that are integers and\n> are in the range [-(2**53)+1, (2**53)-1] are interoperable in the\n> sense that implementations will agree exactly on their numeric\n> values.\n>\n\n\nJust to complete the historical record, that standard wasn't published\nat the time we created the JSON type, and the then existing standard\n(rfc4627) contains no such statement. We felt it was important to be\nable to represent any Postgres data value in as natural a manner as\npossible given the constraints of JSON. rfc7159 was published just as we\nwere finalizing 9.4 with JSONB, although I'm not sure it made a heavy\nimpact on our consciousness. If it had I would still not have wanted to\nimpose any additional limitation on numerics. If you want portable\nnumbers cast the numeric to double before producing the JSON.\n\nISTR having a conversation about the extended use of jsonb in jsonpath a\nwhile back, although I don't remember if that was on or off list. I know\nit troubled me some.\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 18 Jun 2020 16:53:15 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: jsonpath versus NaN"
},
{
"msg_contents": "On Thu, Jun 18, 2020 at 8:04 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> On Thu, Jun 18, 2020 at 7:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> > > Thank you for your answer. I'm trying to understand your point.\n> > > Standard claims that .double() method should behave the same way as\n> > > CAST to double. However, standard references the standard behavior of\n> > > CAST here, not behavior of your implementation of CAST. So, if we\n> > > extend the functionality of standard CAST in our implementation, that\n> > > doesn't automatically mean we should extend the .double() jsonpath\n> > > method in the same way. Is it correct?\n> >\n> > Right. We could, if we chose, extend jsonpath to allow Inf/NaN, but\n> > I don't believe there's an argument that the spec requires us to.\n> >\n> > Also the larger point is that it doesn't make sense to extend jsonpath\n> > that way when we haven't extended json(b) that way. This code wart\n> > wouldn't exist were it not for that inconsistency. Also, I find it hard\n> > to see why anyone would have a use for NaN in a jsonpath test when they\n> > can't write NaN in the input json data, nor have it be correctly reflected\n> > into output json data either.\n>\n> Ok, I got the point. I have nothing against removing support of NaN\n> in jsonpath as far as it doesn't violates the standard. I'm going to\n> write the patch for this.\n\nThe patchset is attached, sorry for the delay.\n\nThe first patch improves error messages, which appears to be unclear\nfor me. If one applies .double() method to a numeric value, we\nrestrict that this numeric value should fit to double precision type.\nIf it doesn't fit, the current error message just says the following.\n\nERROR: jsonpath item method .double() can only be applied to a numeric value\n\nBut that's confusing, because .double() method is naturally applied to\na numeric value. Patch makes this message explicitly report that\nnumeric value is out of range for double type. This patch also adds\ntest exercising this error. When string can't be converted to double\nprecision, I think it's better to explicitly say that we expected\nvalid string representation of double precision type.\n\nSecond patch forbids to convert NaN using .double() method. As I get,\nNaN can't be result of any jsonpath computations assuming there is no\nNaN input. So, I just put an assert to convertJsonbScalar() ensuring\nthere is no NaN in JsonbValue.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Mon, 6 Jul 2020 15:19:21 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: jsonpath versus NaN"
},
{
"msg_contents": "On Mon, Jul 6, 2020 at 3:19 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> The patchset is attached, sorry for the delay.\n>\n> The first patch improves error messages, which appears to be unclear\n> for me. If one applies .double() method to a numeric value, we\n> restrict that this numeric value should fit to double precision type.\n> If it doesn't fit, the current error message just says the following.\n>\n> ERROR: jsonpath item method .double() can only be applied to a numeric value\n>\n> But that's confusing, because .double() method is naturally applied to\n> a numeric value. Patch makes this message explicitly report that\n> numeric value is out of range for double type. This patch also adds\n> test exercising this error. When string can't be converted to double\n> precision, I think it's better to explicitly say that we expected\n> valid string representation of double precision type.\n>\n> Second patch forbids to convert NaN using .double() method. As I get,\n> NaN can't be result of any jsonpath computations assuming there is no\n> NaN input. So, I just put an assert to convertJsonbScalar() ensuring\n> there is no NaN in JsonbValue.\n\nI'm going to push 0002 if there is no objection.\n\nRegarding 0001, I think my new error messages need review.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Wed, 8 Jul 2020 01:12:38 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: jsonpath versus NaN"
},
{
"msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> I'm going to push 0002 if there is no objection.\n> Regarding 0001, I think my new error messages need review.\n\nI do intend to review these, just didn't get to it yet.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 Jul 2020 18:16:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: jsonpath versus NaN"
},
{
"msg_contents": "On Wed, Jul 8, 2020 at 1:16 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > I'm going to push 0002 if there is no objection.\n> > Regarding 0001, I think my new error messages need review.\n>\n> I do intend to review these, just didn't get to it yet.\n\nOK, that you for noticing. I wouldn't push anything before your review.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Wed, 8 Jul 2020 01:17:45 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: jsonpath versus NaN"
},
{
"msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> The patchset is attached, sorry for the delay.\n\n> The first patch improves error messages, which appears to be unclear\n> for me. If one applies .double() method to a numeric value, we\n> restrict that this numeric value should fit to double precision type.\n> If it doesn't fit, the current error message just says the following.\n\n> ERROR: jsonpath item method .double() can only be applied to a numeric value\n\n> But that's confusing, because .double() method is naturally applied to\n> a numeric value. Patch makes this message explicitly report that\n> numeric value is out of range for double type. This patch also adds\n> test exercising this error. When string can't be converted to double\n> precision, I think it's better to explicitly say that we expected\n> valid string representation of double precision type.\n\nI see your point here, but the English of the replacement error messages\ncould be improved. I suggest respectively\n\nnumeric argument of jsonpath item method .%s() is out of range for type double precision\n\nstring argument of jsonpath item method .%s() is not a valid representation of a double precision number\n\nAs for 0002, I'd rather see the convertJsonbScalar() code changed back\nto the way it was, ie just\n\n \t\tcase jbvNumeric:\n \t\t\tnumlen = VARSIZE_ANY(scalarVal->val.numeric);\n \t\t\tpadlen = padBufferToInt(buffer);\n\t\t\t...\n\nThere is no argument for having an its-not-NaN assertion here when\nthere aren't similar assertions throughout the jsonb code.\n\nAlso, it seems like it'd be smart to reject isinf() and isnan() results\nfrom float8in_internal_opt_error in both executeItemOptUnwrapTarget code\npaths, ie numeric source as well as string source. Yeah, we don't expect\nto see those cases in a jbvNumeric (so I wouldn't change the error message\ntext), but it's cheap insurance.\n\nNo other comments.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 Jul 2020 18:20:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: jsonpath versus NaN"
},
{
"msg_contents": "On Thu, Jul 9, 2020 at 1:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > The patchset is attached, sorry for the delay.\n>\n> > The first patch improves error messages, which appears to be unclear\n> > for me. If one applies .double() method to a numeric value, we\n> > restrict that this numeric value should fit to double precision type.\n> > If it doesn't fit, the current error message just says the following.\n>\n> > ERROR: jsonpath item method .double() can only be applied to a numeric value\n>\n> > But that's confusing, because .double() method is naturally applied to\n> > a numeric value. Patch makes this message explicitly report that\n> > numeric value is out of range for double type. This patch also adds\n> > test exercising this error. When string can't be converted to double\n> > precision, I think it's better to explicitly say that we expected\n> > valid string representation of double precision type.\n>\n> I see your point here, but the English of the replacement error messages\n> could be improved. I suggest respectively\n>\n> numeric argument of jsonpath item method .%s() is out of range for type double precision\n>\n> string argument of jsonpath item method .%s() is not a valid representation of a double precision number\n\nGood, thank you for corrections!\n\n> As for 0002, I'd rather see the convertJsonbScalar() code changed back\n> to the way it was, ie just\n>\n> case jbvNumeric:\n> numlen = VARSIZE_ANY(scalarVal->val.numeric);\n> padlen = padBufferToInt(buffer);\n> ...\n>\n> There is no argument for having an its-not-NaN assertion here when\n> there aren't similar assertions throughout the jsonb code.\n>\n> Also, it seems like it'd be smart to reject isinf() and isnan() results\n> from float8in_internal_opt_error in both executeItemOptUnwrapTarget code\n> paths, ie numeric source as well as string source. Yeah, we don't expect\n> to see those cases in a jbvNumeric (so I wouldn't change the error message\n> text), but it's cheap insurance.\n\nOK, corrected as you proposed.\n\n> No other comments.\n\nRevised patches are attached.\n\nI understand both patches as fixes and propose to backpatch them to 12\nif no objections.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Thu, 9 Jul 2020 04:04:19 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: jsonpath versus NaN"
},
{
"msg_contents": "On Thu, Jul 9, 2020 at 4:04 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> I understand both patches as fixes and propose to backpatch them to 12\n> if no objections.\n\nBoth patches are pushed.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sat, 11 Jul 2020 03:26:23 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: jsonpath versus NaN"
},
{
"msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> On Thu, Jul 9, 2020 at 4:04 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>> I understand both patches as fixes and propose to backpatch them to 12\n>> if no objections.\n\n> Both patches are pushed.\n\nThanks for taking care of that!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 11 Jul 2020 13:46:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: jsonpath versus NaN"
}
] |
[
{
"msg_contents": "Hi,\nsrc/backend/commands/sequence.c\nHas two shadows (buf var), with two unnecessary variables declared.\n\nFor readability reasons, the declaration of variable names in the\nprototypes was also corrected.\n\nregards,\nRanier Vilela",
"msg_date": "Thu, 11 Jun 2020 09:49:27 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] fix two shadow vars (src/backend/commands/sequence.c)"
},
{
"msg_contents": "On 2020-Jun-11, Ranier Vilela wrote:\n\n> Hi,\n> src/backend/commands/sequence.c\n> Has two shadows (buf var), with two unnecessary variables declared.\n\nThese are not unnecessary -- removing them breaks translatability of\nthose messages. If these were ssize_t you could use '%zd' (see commit\nac4ef637ad2f) but I don't think you can in this case.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 11 Jun 2020 16:19:21 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] fix two shadow vars (src/backend/commands/sequence.c)"
},
{
"msg_contents": "On Thu, Jun 11, 2020 at 5:19 PM Alvaro Herrera <\nalvherre@2ndquadrant.com> wrote:\n\n> On 2020-Jun-11, Ranier Vilela wrote:\n>\n> > Hi,\n> > src/backend/commands/sequence.c\n> > Has two shadows (buf var), with two unnecessary variables declared.\n>\n> These are not unnecessary -- removing them breaks translatability of\n> those messages. If these were ssize_t you could use '%zd' (see commit\n> ac4ef637ad2f) but I don't think you can in this case.\n>\nHi Alvaro, thanks for the reply.\n\nFile backend\\utils\\sort\\tuplesort.c:\nelog(LOG, \"worker %d using \" INT64_FORMAT \" KB of memory for read buffers\namong %d input tapes\",\nFile backend\\storage\\ipc\\shm_toc.c:\nelog(ERROR, \"could not find key \" UINT64_FORMAT \" in shm TOC at %p\",\nFile backend\\storage\\large_object\\inv_api.c:\n* use errmsg_internal here because we don't want to expose INT64_FORMAT\nerrmsg_internal(\"invalid large object seek target: \" INT64_FORMAT,\n\nelog and errmsg_internal permit the usage proposed by the patch;\ndoes that mean that errmsg does not allow it and does not do the same job\nas snprintf?\n\nregards,\nRanier Vilela\n\n",
"msg_date": "Thu, 11 Jun 2020 19:03:33 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] fix two shadow vars (src/backend/commands/sequence.c)"
},
{
"msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> elog and errmsg_internal, permits use as proposed by the patch,\n> does it mean that errmsg, does not allow and does not do the same job as\n> snprintf?\n\nYes. errmsg() strings are captured for translation. If they contain\nplatform-dependent substrings, that's a problem, because only one variant\nwill get captured. And INT64_FORMAT is platform-dependent.\n\nWe have of late decided that it's safe to use %lld (or %llu) to format\nint64s everywhere, but you then have to cast the printf argument to\nmatch that explicitly. See commit 6a1cd8b92 for precedent.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 11 Jun 2020 18:54:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] fix two shadow vars (src/backend/commands/sequence.c)"
},
{
"msg_contents": "Em qui., 11 de jun. de 2020 às 19:54, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > elog and errmsg_internal, permits use as proposed by the patch,\n> > does it mean that errmsg, does not allow and does not do the same job as\n> > snprintf?\n>\n> Yes. errmsg() strings are captured for translation. If they contain\n> platform-dependent substrings, that's a problem, because only one variant\n> will get captured. And INT64_FORMAT is platform-dependent.\n>\n> We have of late decided that it's safe to use %lld (or %llu) to format\n> int64s everywhere, but you then have to cast the printf argument to\n> match that explicitly. See commit 6a1cd8b92 for precedent.\n>\nHi Tom, thank you for the detailed explanation.\n\nI see commit 6a1cd8b92, and I think which is the same case with\nbasebackup.c (total_checksum_failures),\nmaxv and minv, are int64 (INT64_FORMAT).\n\n%lld -> (long long int) maxv\n%lld -> (long long int) minv\n\nAttached new patch, with fixes from commit 6a1cd8b92.\n\nregards,\nRanier Vilela\n\n\n>\n> regards, tom lane\n>",
"msg_date": "Thu, 11 Jun 2020 22:20:09 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] fix two shadow vars (src/backend/commands/sequence.c)"
}
] |
[
{
"msg_contents": "Hi,\nLatest HEAD, fails with windows regress tests.\n\n float8 ... FAILED 517 ms\n partition_prune ... FAILED 3085 ms\n\nregards,\nRanier VIlela",
"msg_date": "Thu, 11 Jun 2020 09:52:54 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Windows regress fails (latest HEAD)"
},
{
"msg_contents": "\nOn 6/11/20 8:52 AM, Ranier Vilela wrote:\n> Hi,\n> Latest HEAD, fails with windows regress tests.\n>\n> float8 ... FAILED 517 ms\n> partition_prune ... FAILED 3085 ms\n>\n>\n\nRanier,\n\n\nThe first thing you should do when you find this is to see if there is a\nbuildfarm report of the failure. If there isn't then try to work out why.\n\nAlso, when making a report like this, it is essential to let us know\nthings like:\n\n * which commit is causing the failure (git bisect is good for finding\n this)\n * what Windows version you're testing on\n * which compiler you're using\n * which configuration settings you're using\n * what are the regression differences you get in the failing tests\n\nWithout that we can't do anything with your report because it just\ndoesn't have enough information\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 11 Jun 2020 09:01:52 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows regress fails (latest HEAD)"
},
{
"msg_contents": "Em qui., 11 de jun. de 2020 às 10:01, Andrew Dunstan <\nandrew.dunstan@2ndquadrant.com> escreveu:\n\n>\n> On 6/11/20 8:52 AM, Ranier Vilela wrote:\n> > Hi,\n> > Latest HEAD, fails with windows regress tests.\n> >\n> > float8 ... FAILED 517 ms\n> > partition_prune ... FAILED 3085 ms\n> >\n> >\n>\n> The first thing you should do when you find this is to see if there is a\n> buildfarm report of the failure. If there isn't then try to work out why.\n>\nSorry, I will have to research the buildfarm, I have no reference to it.\n\n\n>\n> Also, when making a report like this, it is essential to let us know\n> things like:\n>\n> * which commit is causing the failure (git bisect is good for finding\n> this)\n>\nThanks for hit (git bisect).\n\n\n> * what Windows version you're testing on\n>\nWindows 10 (2004)\n\n * which compiler you're using\n>\nmsvc 2019 (64 bits)\n\n * which configuration settings you're using\n>\nNone. Only build with default configuration (release is default).\n\n * what are the regression differences you get in the failing tests\n>\nIt was sent as an attachment.\n\nregards,\nRanier Vilela",
"msg_date": "Thu, 11 Jun 2020 10:28:56 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Windows regress fails (latest HEAD)"
},
{
"msg_contents": "\nOn 6/11/20 9:28 AM, Ranier Vilela wrote:\n> Em qui., 11 de jun. de 2020 às 10:01, Andrew Dunstan\n> <andrew.dunstan@2ndquadrant.com\n> <mailto:andrew.dunstan@2ndquadrant.com>> escreveu:\n>\n>\n> On 6/11/20 8:52 AM, Ranier Vilela wrote:\n> > Hi,\n> > Latest HEAD, fails with windows regress tests.\n> >\n> > float8 ... FAILED 517 ms\n> > partition_prune ... FAILED 3085 ms\n> >\n> >\n>\n> The first thing you should do when you find this is to see if\n> there is a\n> buildfarm report of the failure. If there isn't then try to work\n> out why.\n>\n> Sorry, I will have to research the buildfarm, I have no reference to it.\n> \n\n\nSee https://buildfarm.postgresql.org\n\n\n>\n> * which configuration settings you're using\n>\n> None. Only build with default configuration (release is default).\n\n\nWe would also need to see the contents of your config.pl\n\n\n>\n> * what are the regression differences you get in the failing tests\n>\n> It was sent as an attachment.\n>\n>\n\nIf you send attachments please refer to them in the body of your email,\notherwise they are easily missed, as I did.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 11 Jun 2020 09:36:19 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows regress fails (latest HEAD)"
},
{
"msg_contents": "Em qui., 11 de jun. de 2020 às 10:36, Andrew Dunstan <\nandrew.dunstan@2ndquadrant.com> escreveu:\n\n>\n> On 6/11/20 9:28 AM, Ranier Vilela wrote:\n> > Em qui., 11 de jun. de 2020 às 10:01, Andrew Dunstan\n> > <andrew.dunstan@2ndquadrant.com\n> > <mailto:andrew.dunstan@2ndquadrant.com>> escreveu:\n> >\n> >\n> > On 6/11/20 8:52 AM, Ranier Vilela wrote:\n> > > Hi,\n> > > Latest HEAD, fails with windows regress tests.\n> > >\n> > > float8 ... FAILED 517 ms\n> > > partition_prune ... FAILED 3085 ms\n> > >\n> > >\n> >\n> > The first thing you should do when you find this is to see if\n> > there is a\n> > buildfarm report of the failure. If there isn't then try to work\n> > out why.\n> >\n> > Sorry, I will have to research the buildfarm, I have no reference to it.\n> >\n>\n>\n> See https://buildfarm.postgresql.org\n\nOk.\n\n\n>\n>\n> >\n> > * which configuration settings you're using\n> >\n> > None. Only build with default configuration (release is default).\n>\n>\n> We would also need to see the contents of your config.pl\n\nIt seems that I am missing something, there is no config.pl, anywhere in\nthe postgres directory.\n\n\n>\n>\n>\n> >\n> > * what are the regression differences you get in the failing tests\n> >\n> > It was sent as an attachment.\n> >\n> >\n>\n> If you send attachments please refer to them in the body of your email,\n> otherwise they are easily missed, as I did.\n>\nAh yes, ok, I'll remember that.\n\nregards,\nRanier Vilela",
"msg_date": "Thu, 11 Jun 2020 10:40:09 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Windows regress fails (latest HEAD)"
},
{
"msg_contents": "\nOn 6/11/20 9:40 AM, Ranier Vilela wrote:\n> Em qui., 11 de jun. de 2020 às 10:36, Andrew Dunstan\n> <andrew.dunstan@2ndquadrant.com\n> <mailto:andrew.dunstan@2ndquadrant.com>> escreveu:\n>\n>\n>\n>\n>\n> >\n> > * which configuration settings you're using\n> >\n> > None. Only build with default configuration (release is default).\n>\n>\n> We would also need to see the contents of your config.pl\n> <http://config.pl>\n>\n> It seems that I am missing something, there is no config.pl\n> <http://config.pl>, anywhere in the postgres directory.\n\n\n\nIf there isn't one it uses config_default.pl.\n\n\nSee src/tools/msvc/README which includes this snippet:\n\n\n The tools for building PostgreSQL using Microsoft Visual Studio\n currently\n consist of the following files:\n\n - Configuration files -\n config_default.pl default configuration arguments\n\n A typical build environment has two more files, buildenv.pl and\n config.pl\n that contain the user's build environment settings and configuration\n arguments.\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 11 Jun 2020 10:00:51 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows regress fails (latest HEAD)"
},
{
"msg_contents": "Em qui., 11 de jun. de 2020 às 11:00, Andrew Dunstan <\nandrew.dunstan@2ndquadrant.com> escreveu:\n\n>\n> On 6/11/20 9:40 AM, Ranier Vilela wrote:\n> > Em qui., 11 de jun. de 2020 às 10:36, Andrew Dunstan\n> > <andrew.dunstan@2ndquadrant.com\n> > <mailto:andrew.dunstan@2ndquadrant.com>> escreveu:\n> >\n> >\n> >\n> >\n> >\n> > >\n> > > * which configuration settings you're using\n> > >\n> > > None. Only build with default configuration (release is default).\n> >\n> >\n> > We would also need to see the contents of your config.pl\n> > <http://config.pl>\n> >\n> > It seems that I am missing something, there is no config.pl\n> > <http://config.pl>, anywhere in the postgres directory.\n>\n>\n>\n> If there isn't one it uses config_default.pl.\n>\nI see.\nAs I did a clean build, from github (git clone), there is no config.pl, so,\nwas using the same file.\n(\nhttps://github.com/postgres/postgres/blob/master/src/tools/msvc/config_default.pl\n)\n\nregards,\nRanier VIlela",
"msg_date": "Thu, 11 Jun 2020 11:05:01 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Windows regress fails (latest HEAD)"
},
{
"msg_contents": ">>>>> \"Ranier\" == Ranier Vilela <ranier.vf@gmail.com> writes:\n\n Ranier> Hi,\n Ranier> Latest HEAD, fails with windows regress tests.\n\n Ranier> three | f1 | sqrt_f1 \n Ranier> -------+----------------------+-----------------------\n Ranier> | 1004.3 | 31.6906926399535\n Ranier> - | 1.2345678901234e+200 | 1.11111110611109e+100\n Ranier> + | 1.2345678901234e+200 | 1.11111110611108e+100\n Ranier> | 1.2345678901234e-200 | 1.11111110611109e-100\n Ranier> (3 rows)\n\nThis error is a surprisingly large one. Normally one expects sqrt to be\naccurate to within half an ulp, i.e. accurate to the limits of the\nformat, though the regression test avoids actually making this\nassumption. But in this case the true output we expect is:\n\n1.111111106111085536...e+100\n\nfor which the closest representable float8 is\n\n1.111111106111085583...e+100 (= 0x1.451DCD2E3ACAFp+332)\n\nwhich should round (since we're doing this test with\nextra_float_digits=0) to\n\n1.11111110611109e+100\n\nThe nearest value that would round to 1.11111110611108e+100 would be\n1.1111111061110848e+100 (= 0x1.451DCD2E3ACABp+332), which is a\ndifference of not less than 4 ulps from the expected value.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Thu, 11 Jun 2020 15:57:56 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Windows regress fails (latest HEAD)"
},
{
"msg_contents": "Em qui., 11 de jun. de 2020 às 10:28, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> Em qui., 11 de jun. de 2020 às 10:01, Andrew Dunstan <\n> andrew.dunstan@2ndquadrant.com> escreveu:\n>\n>>\n>> On 6/11/20 8:52 AM, Ranier Vilela wrote:\n>> > Hi,\n>> > Latest HEAD, fails with windows regress tests.\n>> >\n>> > float8 ... FAILED 517 ms\n>> > partition_prune ... FAILED 3085 ms\n>> >\n>> >\n>>\n>> The first thing you should do when you find this is to see if there is a\n>> buildfarm report of the failure. If there isn't then try to work out why.\n>>\n> Sorry, I will have to research the buildfarm, I have no reference to it.\n>\n>\n>>\n>> Also, when making a report like this, it is essential to let us know\n>> things like:\n>>\n>> * which commit is causing the failure (git bisect is good for finding\n>> this)\n>>\n> Thanks for hit (git bisect).\n>\n>\n>> * what Windows version you're testing on\n>>\n> Windows 10 (2004)\n>\n> * which compiler you're using\n>>\n> msvc 2019 (64 bits)\n>\n\nOnly for registry, if anyone else is using msvc 2019.\nI'm using latest msvc 2019 64 bits (16.6.0)\nProblably this is a compiler optimization bug.\nvcregress check with build DEBUG, pass all 200 tests.\n\nregards,\nRanier Vilela",
"msg_date": "Fri, 26 Jun 2020 08:21:48 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Windows regress fails (latest HEAD)"
},
{
"msg_contents": "Em sex., 26 de jun. de 2020 às 08:21, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> Em qui., 11 de jun. de 2020 às 10:28, Ranier Vilela <ranier.vf@gmail.com>\n> escreveu:\n>\n>> Em qui., 11 de jun. de 2020 às 10:01, Andrew Dunstan <\n>> andrew.dunstan@2ndquadrant.com> escreveu:\n>>\n>>>\n>>> On 6/11/20 8:52 AM, Ranier Vilela wrote:\n>>> > Hi,\n>>> > Latest HEAD, fails with windows regress tests.\n>>> >\n>>> > float8 ... FAILED 517 ms\n>>> > partition_prune ... FAILED 3085 ms\n>>> >\n>>> >\n>>>\n>>> The first thing you should do when you find this is to see if there is a\n>>> buildfarm report of the failure. If there isn't then try to work out why.\n>>>\n>> Sorry, I will have to research the buildfarm, I have no reference to it.\n>>\n>>\n>>>\n>>> Also, when making a report like this, it is essential to let us know\n>>> things like:\n>>>\n>>> * which commit is causing the failure (git bisect is good for finding\n>>> this)\n>>>\n>> Thanks for hit (git bisect).\n>>\n>>\n>>> * what Windows version you're testing on\n>>>\n>> Windows 10 (2004)\n>>\n>> * which compiler you're using\n>>>\n>> msvc 2019 (64 bits)\n>>\n>\n> Only for registry, if anyone else is using msvc 2019.\n> I'm using latest msvc 2019 64 bits (16.6.0)\n> Problably this is a compiler optimization bug.\n> vcregress check with build DEBUG, pass all 200 tests.\n>\nWith the current HEAD, the regression float8 in release mode (msvc 2019 64\nbits) is gone.\nMaybe it's this commit:\nhttps://github.com/postgres/postgres/commit/0aa8f764088ea0f36620ae2955fa6c54ec736c46\n\nBut (partition_prune) persists.\npartition_prune ... FAILED\n\nregards,\nRanier Vilela",
"msg_date": "Thu, 10 Sep 2020 09:04:05 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Windows regress fails (latest HEAD)"
},
{
"msg_contents": "On Tue, Nov 10, 2020 at 12:03 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n> Em sex., 26 de jun. de 2020 às 08:21, Ranier Vilela <ranier.vf@gmail.com> escreveu:\n>>\n>> Em qui., 11 de jun. de 2020 às 10:28, Ranier Vilela <ranier.vf@gmail.com> escreveu:\n>>>\n>>> Em qui., 11 de jun. de 2020 às 10:01, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> escreveu:\n>>>>\n>>>>\n>>>> On 6/11/20 8:52 AM, Ranier Vilela wrote:\n>>>> > Hi,\n>>>> > Latest HEAD, fails with windows regress tests.\n>>>> >\n>>>> > float8 ... FAILED 517 ms\n>>>> > partition_prune ... FAILED 3085 ms\n>>>> >\n>>>> >\n>>>>\n>>>> The first thing you should do when you find this is to see if there is a\n>>>> buildfarm report of the failure. If there isn't then try to work out why.\n>>>\n>>> Sorry, I will have to research the buildfarm, I have no reference to it.\n>>>\n>>>>\n>>>>\n>>>> Also, when making a report like this, it is essential to let us know\n>>>> things like:\n>>>>\n>>>> * which commit is causing the failure (git bisect is good for finding\n>>>> this)\n>>>\n>>> Thanks for hit (git bisect).\n>>>\n>>>>\n>>>> * what Windows version you're testing on\n>>>\n>>> Windows 10 (2004)\n>>>\n>>>> * which compiler you're using\n>>>\n>>> msvc 2019 (64 bits)\n>>\n>>\n>> Only for registry, if anyone else is using msvc 2019.\n>> I'm using latest msvc 2019 64 bits (16.6.0)\n>> Problably this is a compiler optimization bug.\n>> vcregress check with build DEBUG, pass all 200 tests.\n>\n> With the current HEAD, the regression float8 in release mode (msvc 2019 64 bits) is gone.\n> Maybe it's this commit:\n> https://github.com/postgres/postgres/commit/0aa8f764088ea0f36620ae2955fa6c54ec736c46\n>\n> But (partition_prune) persists.\n> partition_prune ... FAILED\n>\n> regards,\n> Ranier Vilela\n\nI am also experiencing this issue on one of my Windows machines (x64)\nusing 12.4. I believe this is new, possibly since 12.2. It doesn't\noccur on another machine though, which is strange. 
It appears to be\nthe same diff output. Is it possible that the given result is also\nvalid for this test?\n\nRussell",
"msg_date": "Tue, 10 Nov 2020 12:16:18 -0500",
"msg_from": "Russell Foster <russell.foster.coding@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows regress fails (latest HEAD)"
},
{
"msg_contents": "\nOn 11/10/20 6:16 PM, Russell Foster wrote:\n> On Tue, Nov 10, 2020 at 12:03 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>>\n>> Em sex., 26 de jun. de 2020 às 08:21, Ranier Vilela <ranier.vf@gmail.com> escreveu:\n>>>\n>>> Em qui., 11 de jun. de 2020 às 10:28, Ranier Vilela <ranier.vf@gmail.com> escreveu:\n>>>>\n>>>> Em qui., 11 de jun. de 2020 às 10:01, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> escreveu:\n>>>>>\n>>>>>\n>>>>> On 6/11/20 8:52 AM, Ranier Vilela wrote:\n>>>>>> Hi,\n>>>>>> Latest HEAD, fails with windows regress tests.\n>>>>>>\n>>>>>> float8 ... FAILED 517 ms\n>>>>>> partition_prune ... FAILED 3085 ms\n>>>>>>\n>>>>>>\n>>>>>\n>>>>> The first thing you should do when you find this is to see if there is a\n>>>>> buildfarm report of the failure. If there isn't then try to work out why.\n>>>>\n>>>> Sorry, I will have to research the buildfarm, I have no reference to it.\n>>>>\n>>>>>\n>>>>>\n>>>>> Also, when making a report like this, it is essential to let us know\n>>>>> things like:\n>>>>>\n>>>>> * which commit is causing the failure (git bisect is good for finding\n>>>>> this)\n>>>>\n>>>> Thanks for hit (git bisect).\n>>>>\n>>>>>\n>>>>> * what Windows version you're testing on\n>>>>\n>>>> Windows 10 (2004)\n>>>>\n>>>>> * which compiler you're using\n>>>>\n>>>> msvc 2019 (64 bits)\n>>>\n>>>\n>>> Only for registry, if anyone else is using msvc 2019.\n>>> I'm using latest msvc 2019 64 bits (16.6.0)\n>>> Problably this is a compiler optimization bug.\n>>> vcregress check with build DEBUG, pass all 200 tests.\n>>\n>> With the current HEAD, the regression float8 in release mode (msvc 2019 64 bits) is gone.\n>> Maybe it's this commit:\n>> https://github.com/postgres/postgres/commit/0aa8f764088ea0f36620ae2955fa6c54ec736c46\n>>\n>> But (partition_prune) persists.\n>> partition_prune ... FAILED\n>>\n>> regards,\n>> Ranier Vilela\n> \n> I am also experiencing this issue on one of my Windows machines (x64)\n> using 12.4. 
I believe this is new, possibly since 12.2. It doesn't\n> occur on another machine though, which is strange. It appears to be\n> the same diff output. Is it possible that the given result is also\n> valid for this test?\n> \n\nThat's unlikely, I think. The regression tests are constructed so that\nthe estimates are stable. It's more likely this is some difference in\nrounding behavior, for example. I wonder which msvc builds are used on\nthe machines that fail/pass the tests, and if the compiler flags are the\nsame.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 10 Nov 2020 18:42:05 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows regress fails (latest HEAD)"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> That's unlikely, I think. The regression tests are constructed so that\n> the estimates are stable. It's more likely this is some difference in\n> rounding behavior, for example.\n\nThe reported delta is in the actual row count, not an estimate.\nHow could that be subject to roundoff issues? And there's no\ndelta in the Append's inputs, so this seems like it's a flat-out\nmiss of a row count in EXPLAIN ANALYZE.\n\n> I wonder which msvc builds are used on\n> the machines that fail/pass the tests, and if the compiler flags are the\n> same.\n\nYeah, this might be a fruitful way to figure out \"what's different\nfrom the buildfarm\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 Nov 2020 13:15:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Windows regress fails (latest HEAD)"
},
{
"msg_contents": "On Tue, Nov 10, 2020 at 1:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> > That's unlikely, I think. The regression tests are constructed so that\n> > the estimates are stable. It's more likely this is some difference in\n> > rounding behavior, for example.\n>\n> The reported delta is in the actual row count, not an estimate.\n> How could that be subject to roundoff issues? And there's no\n> delta in the Append's inputs, so this seems like it's a flat-out\n> miss of a row count in EXPLAIN ANALYZE.\n>\n> > I wonder which msvc builds are used on\n> > the machines that fail/pass the tests, and if the compiler flags are the\n> > same.\n>\n> Yeah, this might be a fruitful way to figure out \"what's different\n> from the buildfarm\".\n>\n> regards, tom lane\n\nHmm..anyway I can help here? I don't believe I am using any special\ncompile options. I am using VS 2019. The thing is, both systems I have\nuse the same build. I call msvcvars to set up the environment:\n\n\"%ProgramFiles(x86)%\\Microsoft Visual\nStudio\\2019\\Enterprise\\VC\\Auxiliary\\Build\\vcvarsall.bat\" amd64\n\nI also saw some recent changes have been made around these tests, so I\ncan try the latest too.\n\n\n",
"msg_date": "Tue, 10 Nov 2020 13:43:33 -0500",
"msg_from": "Russell Foster <russell.foster.coding@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows regress fails (latest HEAD)"
},
{
"msg_contents": "\n\n\nOn 11/10/20 7:15 PM, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> That's unlikely, I think. The regression tests are constructed so that\n>> the estimates are stable. It's more likely this is some difference in\n>> rounding behavior, for example.\n> \n> The reported delta is in the actual row count, not an estimate.\n> How could that be subject to roundoff issues? And there's no\n> delta in the Append's inputs, so this seems like it's a flat-out\n> miss of a row count in EXPLAIN ANALYZE.\n> \n\nMy bad. I've not noticed it's EXPLAIN ANALYZE (COSTS OFF) so I thought\nit's estimates. You're right this can't be a roundoff error.\n\n>> I wonder which msvc builds are used on the machines that fail/pass\n>> the tests, and if the compiler flags are the same.\n> \n> Yeah, this might be a fruitful way to figure out \"what's different\n> from the buildfarm\".\n> \n\nYeah. Also Russell claims to have two \"same\" machines out of which one\nworks and the other one fails.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 10 Nov 2020 19:52:48 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows regress fails (latest HEAD)"
},
{
"msg_contents": "Em ter., 10 de nov. de 2020 às 14:16, Russell Foster <\nrussell.foster.coding@gmail.com> escreveu:\n\n> On Tue, Nov 10, 2020 at 12:03 PM Ranier Vilela <ranier.vf@gmail.com>\n> wrote:\n> >\n> > Em sex., 26 de jun. de 2020 às 08:21, Ranier Vilela <ranier.vf@gmail.com>\n> escreveu:\n> >>\n> >> Em qui., 11 de jun. de 2020 às 10:28, Ranier Vilela <\n> ranier.vf@gmail.com> escreveu:\n> >>>\n> >>> Em qui., 11 de jun. de 2020 às 10:01, Andrew Dunstan <\n> andrew.dunstan@2ndquadrant.com> escreveu:\n> >>>>\n> >>>>\n> >>>> On 6/11/20 8:52 AM, Ranier Vilela wrote:\n> >>>> > Hi,\n> >>>> > Latest HEAD, fails with windows regress tests.\n> >>>> >\n> >>>> > float8 ... FAILED 517 ms\n> >>>> > partition_prune ... FAILED 3085 ms\n> >>>> >\n> >>>> >\n> >>>>\n> >>>> The first thing you should do when you find this is to see if there\n> is a\n> >>>> buildfarm report of the failure. If there isn't then try to work out\n> why.\n> >>>\n> >>> Sorry, I will have to research the buildfarm, I have no reference to\n> it.\n> >>>\n> >>>>\n> >>>>\n> >>>> Also, when making a report like this, it is essential to let us know\n> >>>> things like:\n> >>>>\n> >>>> * which commit is causing the failure (git bisect is good for\n> finding\n> >>>> this)\n> >>>\n> >>> Thanks for hit (git bisect).\n> >>>\n> >>>>\n> >>>> * what Windows version you're testing on\n> >>>\n> >>> Windows 10 (2004)\n> >>>\n> >>>> * which compiler you're using\n> >>>\n> >>> msvc 2019 (64 bits)\n> >>\n> >>\n> >> Only for registry, if anyone else is using msvc 2019.\n> >> I'm using latest msvc 2019 64 bits (16.6.0)\n> >> Problably this is a compiler optimization bug.\n> >> vcregress check with build DEBUG, pass all 200 tests.\n> >\n> > With the current HEAD, the regression float8 in release mode (msvc 2019\n> 64 bits) is gone.\n> > Maybe it's this commit:\n> >\n> https://github.com/postgres/postgres/commit/0aa8f764088ea0f36620ae2955fa6c54ec736c46\n> >\n> > But (partition_prune) persists.\n> > partition_prune ... 
FAILED\n> >\n> > regards,\n> > Ranier Vilela\n>\n> I am also experiencing this issue on one of my Windows machines (x64)\n> using 12.4. I believe this is new, possibly since 12.2. It doesn't\n> occur on another machine though, which is strange. It appears to be\n> the same diff output. Is it possible that the given result is also\n> valid for this test?\n>\nHi Russel,\nIn DEBUG mode, the issue is gone (all 202 tests pass).\nYou can be sure yourself.\nI think that compiler code generation bug...\n\nRanier Vilela",
"msg_date": "Tue, 10 Nov 2020 18:19:21 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Windows regress fails (latest HEAD)"
},
{
"msg_contents": "On Tue, Nov 10, 2020 at 1:52 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n>\n>\n>\n> On 11/10/20 7:15 PM, Tom Lane wrote:\n> > Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> >> That's unlikely, I think. The regression tests are constructed so that\n> >> the estimates are stable. It's more likely this is some difference in\n> >> rounding behavior, for example.\n> >\n> > The reported delta is in the actual row count, not an estimate.\n> > How could that be subject to roundoff issues? And there's no\n> > delta in the Append's inputs, so this seems like it's a flat-out\n> > miss of a row count in EXPLAIN ANALYZE.\n> >\n>\n> My bad. I've not noticed it's EXPLAIN ANALYZE (COSTS OFF) so I thought\n> it's estimates. You're right this can't be a roundoff error.\n>\n> >> I wonder which msvc builds are used on the machines that fail/pass\n> >> the tests, and if the compiler flags are the same.\n> >\n> > Yeah, this might be a fruitful way to figure out \"what's different\n> > from the buildfarm\".\n> >\n>\n> Yeah. Also Russell claims to have two \"same\" machines out of which one\n> works and the other one fails.\n\nNever claimed they were the same, but they are both Windows x64. 
Here\nare some more details:\n\nTest Passes:\nVM machine (Build Server)\nMicrosoft Windows 10 Pro\nVersion 10.0.18363 Build 18363\nMicrosoft (R) C/C++ Optimizing Compiler Version 19.27.29112 for x64\n\nCompile args:\nC:\\Program Files (x86)\\Microsoft Visual\nStudio\\2019\\Enterprise\\VC\\Tools\\MSVC\\14.27.29110\\bin\\HostX86\\x64\\CL.exe\n/c /Isrc/include /Isrc/include/port/win32\n/Isrc/include/port/win32_msvc /Iexternals/zlib\n/Iexternals/openssl\\include /Isrc/backend /Zi /nologo /W3 /WX-\n/diagnostics:column /Ox /D WIN32 /D _WINDOWS /D __WINDOWS__ /D\n__WIN32__ /D EXEC_BACKEND /D WIN32_STACK_RLIMIT=4194304 /D\n_CRT_SECURE_NO_DEPRECATE /D _CRT_NONSTDC_NO_DEPRECATE /D BUILDING_DLL\n/D _MBCS /GF /Gm- /EHsc /MD /GS /fp:precise /Zc:wchar_t /Zc:forScope\n/Zc:inline /Fo\".\\Release\\postgres\\\\\" /Fd\".\\Release\\postgres\\vc142.pdb\"\n/Gd /TC /wd4018 /wd4244 /wd4273 /wd4102 /wd4090 /wd4267 /FC\n/errorReport:queue /MP src/backend/catalog/partition.c\n\nTest Fails:\nLaptop machine (Development)\nMicrosoft Windows 10 Enterprise\nVersion 10.0.19041 Build 19041\nMicrosoft (R) C/C++ Optimizing Compiler Version 19.27.29112 for x64\n\nCompile args:\nC:\\Program Files (x86)\\Microsoft Visual\nStudio\\2019\\Enterprise\\VC\\Tools\\MSVC\\14.27.29110\\bin\\HostX86\\x64\\CL.exe\n/c /Isrc/include /Isrc/include/port/win32\n/Isrc/include/port/win32_msvc /Iexternals/zlib\n/Iexternals/openssl\\include /Isrc/backend /Zi /nologo /W3 /WX-\n/diagnostics:column /Ox /D WIN32 /D _WINDOWS /D __WINDOWS__ /D\n__WIN32__ /D EXEC_BACKEND /D WIN32_STACK_RLIMIT=4194304 /D\n_CRT_SECURE_NO_DEPRECATE /D _CRT_NONSTDC_NO_DEPRECATE /D BUILDING_DLL\n/D _MBCS /GF /Gm- /EHsc /MD /GS /fp:precise /Zc:wchar_t /Zc:forScope\n/Zc:inline /Fo\".\\Release\\postgres\\\\\" /Fd\".\\Release\\postgres\\vc142.pdb\"\n/Gd /TC /wd4018 /wd4244 /wd4273 /wd4102 /wd4090 /wd4267 /FC\n/errorReport:queue /MP src/backend/catalog/partition.c\n\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The 
Enterprise PostgreSQL Company\n\nCompiler versions are the same and the build args seem the same. Also,\ntried with the latest REL_12_STABLE and REL_13_STABLE, and they both\nfail on my dev machine as well.\n\nI'm not going to pretend to know why the explain should be equal in\nthis case, but I wonder what difference it would make if the output\nfor both is equal and/or correct? From parttion_prune.out on the\nfailing machine, line 2810:\n\n-- Multiple partitions\ninsert into tbl1 values (1001), (1010), (1011);\nexplain (analyze, costs off, summary off, timing off)\nselect * from tbl1 inner join tprt on tbl1.col1 > tprt.col1;\n QUERY PLAN\n--------------------------------------------------------------------------\n Nested Loop (actual rows=23 loops=1)\n -> Seq Scan on tbl1 (actual rows=5 loops=1)\n -> Append (actual rows=5 loops=5)\n -> Index Scan using tprt1_idx on tprt_1 (actual rows=2 loops=5)\n Index Cond: (col1 < tbl1.col1)\n -> Index Scan using tprt2_idx on tprt_2 (actual rows=3 loops=4)\n Index Cond: (col1 < tbl1.col1)\n -> Index Scan using tprt3_idx on tprt_3 (actual rows=1 loops=2)\n Index Cond: (col1 < tbl1.col1)\n -> Index Scan using tprt4_idx on tprt_4 (never executed)\n Index Cond: (col1 < tbl1.col1)\n -> Index Scan using tprt5_idx on tprt_5 (never executed)\n Index Cond: (col1 < tbl1.col1)\n -> Index Scan using tprt6_idx on tprt_6 (never executed)\n Index Cond: (col1 < tbl1.col1)\n(15 rows)\n\nexplain (analyze, costs off, summary off, timing off)\nselect * from tbl1 inner join tprt on tbl1.col1 = tprt.col1;\n QUERY PLAN\n--------------------------------------------------------------------------\n Nested Loop (actual rows=3 loops=1)\n -> Seq Scan on tbl1 (actual rows=5 loops=1)\n -> Append (actual rows=0 loops=5)\n -> Index Scan using tprt1_idx on tprt_1 (never executed)\n Index Cond: (col1 = tbl1.col1)\n -> Index Scan using tprt2_idx on tprt_2 (actual rows=1 loops=2)\n Index Cond: (col1 = tbl1.col1)\n -> Index Scan using tprt3_idx on tprt_3 (actual 
rows=0 loops=3)\n Index Cond: (col1 = tbl1.col1)\n -> Index Scan using tprt4_idx on tprt_4 (never executed)\n Index Cond: (col1 = tbl1.col1)\n -> Index Scan using tprt5_idx on tprt_5 (never executed)\n Index Cond: (col1 = tbl1.col1)\n -> Index Scan using tprt6_idx on tprt_6 (never executed)\n Index Cond: (col1 = tbl1.col1)\n(15 rows)\n\nselect tbl1.col1, tprt.col1 from tbl1\ninner join tprt on tbl1.col1 > tprt.col1\norder by tbl1.col1, tprt.col1;\n col1 | col1\n------+------\n 501 | 10\n 501 | 20\n 505 | 10\n 505 | 20\n 505 | 501\n 505 | 502\n 1001 | 10\n 1001 | 20\n 1001 | 501\n 1001 | 502\n 1001 | 505\n 1010 | 10\n 1010 | 20\n 1010 | 501\n 1010 | 502\n 1010 | 505\n 1010 | 1001\n 1011 | 10\n 1011 | 20\n 1011 | 501\n 1011 | 502\n 1011 | 505\n 1011 | 1001\n(23 rows)\n\nselect tbl1.col1, tprt.col1 from tbl1\ninner join tprt on tbl1.col1 = tprt.col1\norder by tbl1.col1, tprt.col1;\n col1 | col1\n------+------\n 501 | 501\n 505 | 505\n 1001 | 1001\n(3 rows)\n\nHere the selects return the same values as the expected.\n\n\n",
"msg_date": "Tue, 10 Nov 2020 16:22:37 -0500",
"msg_from": "Russell Foster <russell.foster.coding@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows regress fails (latest HEAD)"
},
{
"msg_contents": "On Tue, Nov 10, 2020 at 04:22:37PM -0500, Russell Foster wrote:\n> Never claimed they were the same, but they are both Windows x64. Here\n> are some more details:\n> \n> Test Passes:\n> VM machine (Build Server)\n> Microsoft Windows 10 Pro\n> Version 10.0.18363 Build 18363\n> Microsoft (R) C/C++ Optimizing Compiler Version 19.27.29112 for x64\n> \n> Compile args:\n> C:\\Program Files (x86)\\Microsoft Visual\n> Studio\\2019\\Enterprise\\VC\\Tools\\MSVC\\14.27.29110\\bin\\HostX86\\x64\\CL.exe\n> /c /Isrc/include /Isrc/include/port/win32\n> /Isrc/include/port/win32_msvc /Iexternals/zlib\n> /Iexternals/openssl\\include /Isrc/backend /Zi /nologo /W3 /WX-\n> /diagnostics:column /Ox /D WIN32 /D _WINDOWS /D __WINDOWS__ /D\n> __WIN32__ /D EXEC_BACKEND /D WIN32_STACK_RLIMIT=4194304 /D\n> _CRT_SECURE_NO_DEPRECATE /D _CRT_NONSTDC_NO_DEPRECATE /D BUILDING_DLL\n> /D _MBCS /GF /Gm- /EHsc /MD /GS /fp:precise /Zc:wchar_t /Zc:forScope\n> /Zc:inline /Fo\".\\Release\\postgres\\\\\" /Fd\".\\Release\\postgres\\vc142.pdb\"\n> /Gd /TC /wd4018 /wd4244 /wd4273 /wd4102 /wd4090 /wd4267 /FC\n> /errorReport:queue /MP src/backend/catalog/partition.c\n\ndrongo is the closest thing we have in the buildfarm for this setup.\nHere is the boring option part when it comes to Postgres compilation:\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=drongo&dt=2020-11-10%2018%3A59%3A20&stg=make\n C:\\\\Program Files (x86)\\\\Microsoft Visual\n Studio\\\\2019\\\\BuildTools\\\\VC\\\\Tools\\\\MSVC\\\\14.23.28105\\\\bin\\\\HostX86\\\\x64\\\\CL.exe\n /c /Isrc/include /Isrc/include/port/win32\n /Isrc/include/port/win32_msvc /Ic:\\\\prog\\\\3p64\\\\include\n /I\"C:\\\\Program Files\\\\OpenSSL-Win64\\\\include\" /Isrc/backend /Zi\n /nologo /W3 /WX- /diagnostics:column /Ox /D WIN32 /D _WINDOWS /D\n __WINDOWS__ /D __WIN32__ /D WIN32_STACK_RLIMIT=4194304 /D\n _CRT_SECURE_NO_DEPRECATE /D _CRT_NONSTDC_NO_DEPRECATE /D\n BUILDING_DLL /D _MBCS /GF /Gm- /EHsc /MD /GS /fp:precise /Zc:wchar_t\n 
/Zc:forScope /Zc:inline /Fo\".\\\\Release\\\\postgres\\\\\\\\\"\n /Fd\".\\\\Release\\\\postgres\\\\vc142.pdb\" /Gd /TC /wd4018 /wd4244 /wd4273\n /wd4102 /wd4090 /wd4267 /FC /errorReport:queue /MP [ source files ]\n [...]\n\nSo except if I am missing something we have an exact match here.\n\n> Test Fails:\n> Laptop machine (Development)\n> Microsoft Windows 10 Enterprise\n> Version 10.0.19041 Build 19041\n> Microsoft (R) C/C++ Optimizing Compiler Version 19.27.29112 for x64\n> \n> Compile args:\n> C:\\Program Files (x86)\\Microsoft Visual\n> Studio\\2019\\Enterprise\\VC\\Tools\\MSVC\\14.27.29110\\bin\\HostX86\\x64\\CL.exe\n> /c /Isrc/include /Isrc/include/port/win32\n> /Isrc/include/port/win32_msvc /Iexternals/zlib\n> /Iexternals/openssl\\include /Isrc/backend /Zi /nologo /W3 /WX-\n> /diagnostics:column /Ox /D WIN32 /D _WINDOWS /D __WINDOWS__ /D\n> __WIN32__ /D EXEC_BACKEND /D WIN32_STACK_RLIMIT=4194304 /D\n> _CRT_SECURE_NO_DEPRECATE /D _CRT_NONSTDC_NO_DEPRECATE /D BUILDING_DLL\n> /D _MBCS /GF /Gm- /EHsc /MD /GS /fp:precise /Zc:wchar_t /Zc:forScope\n> /Zc:inline /Fo\".\\Release\\postgres\\\\\" /Fd\".\\Release\\postgres\\vc142.pdb\"\n> /Gd /TC /wd4018 /wd4244 /wd4273 /wd4102 /wd4090 /wd4267 /FC\n> /errorReport:queue /MP src/backend/catalog/partition.c\n\nAnd this configuration matches exactly what you have with the host\nwhere the test passed.\n\nNow I do see a difference in the Windows 10 build involved, 10.0.19041\nfails but 10.0.18363 passes. I find rather hard to buy that this is\ndirectly a Postgres bug. The compiler version is the same, so the\nissue seems to be related to the way the code compiled is\ninterpreted.\n--\nMichael",
"msg_date": "Wed, 11 Nov 2020 10:04:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Windows regress fails (latest HEAD)"
},
{
"msg_contents": "Hello hackers,\n11.11.2020 04:04, Michael Paquier wrote:\n> And this configuration matches exactly what you have with the host\n> where the test passed.\n>\n> Now I do see a difference in the Windows 10 build involved, 10.0.19041\n> fails but 10.0.18363 passes. I find rather hard to buy that this is\n> directly a Postgres bug. The compiler version is the same, so the\n> issue seems to be related to the way the code compiled is\n> interpreted.\n> --\n> Michael\nI've managed to reproduce that fail on Windows 10 Build 19042.631 (20H2).\nThe \"actual rows\" value printed there is calculated as:\ndouble\t\trows = planstate->instrument->ntuples / nloops;\nand with a simple debugging code, I've found that\nplanstate->instrument->ntuples in that case is 3, and nloops is 5. So\nrows = 0.6.\n\nSurprisingly, printf(\"%.0f\", 0.6); in this Windows build prints 0.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Sat, 6 Feb 2021 08:00:01 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows regress fails (latest HEAD)"
}
] |
[
{
"msg_contents": "While testing Pavel's patch for pg_dump --filter, I got:\n\npg_dump: error: could not write to output file: Success\n[pryzbyj@database postgresql]$ echo $?\n1\n\nI see we tried to fix it few years ago:\nhttps://www.postgresql.org/message-id/flat/1498120508308.9826%40infotecs.ru\nhttps://www.postgresql.org/message-id/flat/20160125143008.2539.2878%40wrigleys.postgresql.org\nhttps://www.postgresql.org/message-id/20160307.174354.251049100.horiguchi.kyotaro@lab.ntt.co.jp\nhttps://www.postgresql.org/message-id/20150608174336.GM133018@postgresql.org\n\nCommits:\n4d57e83816778c6f61ea35c697f937a6f9c3c3de\n9a3b5d3ad0f1c19c47e2ee65b372344cb0616c9a\n\nThis patch fixes it for me\npg_dump: error: could not write to output file: No space left on device\n\n--- a/src/bin/pg_dump/pg_backup_directory.c\n+++ b/src/bin/pg_dump/pg_backup_directory.c\n@@ -347,8 +347,12 @@ _WriteData(ArchiveHandle *AH, const void *data, size_t dLen)\n lclContext *ctx = (lclContext *) AH->formatData;\n \n if (dLen > 0 && cfwrite(data, dLen, ctx->dataFH) != dLen)\n+ {\n+ if (errno == 0)\n+ errno = ENOSPC;\n fatal(\"could not write to output file: %s\",\n get_cfp_error(ctx->dataFH));\n+ }\n }\n\n\nPS. Due to $UserError, I originally sent this message with inaccurate RFC822\nheaders..\n\n\n",
"msg_date": "Thu, 11 Jun 2020 10:37:53 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "pg_dump, gzwrite, and errno"
},
{
"msg_contents": "On 2020-Jun-11, Justin Pryzby wrote:\n\n> --- a/src/bin/pg_dump/pg_backup_directory.c\n> +++ b/src/bin/pg_dump/pg_backup_directory.c\n> @@ -347,8 +347,12 @@ _WriteData(ArchiveHandle *AH, const void *data, size_t dLen)\n> lclContext *ctx = (lclContext *) AH->formatData;\n> \n> if (dLen > 0 && cfwrite(data, dLen, ctx->dataFH) != dLen)\n> + {\n> + if (errno == 0)\n> + errno = ENOSPC;\n> fatal(\"could not write to output file: %s\",\n> get_cfp_error(ctx->dataFH));\n> + }\n> }\n\nThis seems correct to me. (I spent a long time looking at zlib sources\nto convince myself that it does work with compressed files too). There\nare more calls to cfwrite in pg_backup_directory.c though -- we should\npatch them all.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 18 Jun 2020 17:20:27 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump, gzwrite, and errno"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Jun-11, Justin Pryzby wrote:\n>> --- a/src/bin/pg_dump/pg_backup_directory.c\n>> +++ b/src/bin/pg_dump/pg_backup_directory.c\n>> @@ -347,8 +347,12 @@ _WriteData(ArchiveHandle *AH, const void *data, size_t dLen)\n>> lclContext *ctx = (lclContext *) AH->formatData;\n>> \n>> if (dLen > 0 && cfwrite(data, dLen, ctx->dataFH) != dLen)\n>> + {\n>> + if (errno == 0)\n>> + errno = ENOSPC;\n>> fatal(\"could not write to output file: %s\",\n>> get_cfp_error(ctx->dataFH));\n>> + }\n>> }\n\n> This seems correct to me.\n\nSurely it's insufficient as-is, because there is no reason to suppose\nthat errno is zero at entry. You'd need to set errno = 0 first.\n\nAlso it's fairly customary in our sources to include a comment about\nthis machination; so the full ritual is usually more like\n\n errno = 0;\n if (pg_pwrite(fd, data, len, xlrec->offset) != len)\n {\n /* if write didn't set errno, assume problem is no disk space */\n if (errno == 0)\n errno = ENOSPC;\n ereport ...\n\n> (I spent a long time looking at zlib sources\n> to convince myself that it does work with compressed files too)\n\nYeah, it's not obvious that gzwrite has the same behavior w.r.t. errno\nas a plain write. But there's not much we can do to improve matters\nif it does not, so we might as well assume it does.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Jun 2020 17:30:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump, gzwrite, and errno"
},
{
"msg_contents": "On 2020-Jun-18, Tom Lane wrote:\n\n> Surely it's insufficient as-is, because there is no reason to suppose\n> that errno is zero at entry.  You'd need to set errno = 0 first.\n\nOh, right.\n\n> Also it's fairly customary in our sources to include a comment about\n> this machination; so the full ritual is usually more like\n\nYeah, I had that in my local copy.  Done like that in all the most\nobvious places.  But there are more places that are still wrong: I\nbelieve every single place that calls WRITE_ERROR_EXIT is doing the\nwrong thing.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 19 Jun 2020 17:30:47 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump, gzwrite, and errno"
}
] |
[
{
"msg_contents": "Before v12, stddev_pop() had the following behavior with just a\nsingle input value:\n\nregression=# SELECT stddev_pop('42'::float8);\n stddev_pop \n------------\n 0\n(1 row)\n\nregression=# SELECT stddev_pop('inf'::float8);\n stddev_pop \n------------\n NaN\n(1 row)\n\nregression=# SELECT stddev_pop('nan'::float8);\n stddev_pop \n------------\n NaN\n(1 row)\n\nAs of v12, though, all three cases produce 0. I am not sure what\nto think about that with respect to an infinity input, but I'm\nquite sure I don't like it for NaN input.\n\nIt looks like the culprit is the introduction of the \"Youngs-Cramer\"\nalgorithm in float8_accum: nothing is done to Sxx at the first iteration,\neven if the input is inf or NaN. I'd be inclined to force Sxx to NaN\nwhen the first input is NaN, and perhaps also when it's Inf.\nAlternatively we could clean up in the finalization routine by noting\nthat Sx is Inf/NaN, but that seems messier. Thoughts?\n\n(I came across this by noting that the results don't agree with\nnumeric accumulation, which isn't using Youngs-Cramer.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 11 Jun 2020 14:06:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Definitional issue: stddev_pop (and related) for 1 input"
},
{
"msg_contents": "I wrote:\n> Before v12, stddev_pop() had the following behavior with just a\n> single input value:\n> ...\n> As of v12, though, all three cases produce 0. I am not sure what\n> to think about that with respect to an infinity input, but I'm\n> quite sure I don't like it for NaN input.\n\nWhile I'm still not sure whether there's an academic argument that\nzero is a reasonable stddev value for a single input that is Inf,\nit seems to me that backwards compatibility is a sufficient reason\nfor going back to producing NaN for that.\n\nHence, attached are some proposed patches. 0001 just adds test\ncases demonstrating the current behavior; then 0002 makes the\nproposed code change. It's easy to check that the test case results\nafter 0002 match what v11 produces.\n\n0003 deals with a different problem which I noted in [1]: the numeric\nvariants of var_samp and stddev_samp also do the wrong thing for a\nsingle special input. Their disease is that they produce NaN for a\nsingle NaN input, where it seems more sensible to produce NULL.\nAt least, NULL is what we get for the same case with the float\naggregates, so we have to change one or the other set of functions\nif we want consistency.\n\nI propose back-patching 0001/0002 as far as v12, since the failure\nto match the old outputs seems like a pretty clear bug/regression.\nHowever, I'd be content to apply 0003 only to HEAD. That misbehavior\nis very ancient, and the lack of complaints suggests that few people\nreally care about this fine point.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/606717.1591924582%40sss.pgh.pa.us",
"msg_date": "Fri, 12 Jun 2020 15:53:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Definitional issue: stddev_pop (and related) for 1 input"
},
{
"msg_contents": "On Fri, 12 Jun 2020 at 20:53, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > Before v12, stddev_pop() had the following behavior with just a\n> > single input value:\n> > ...\n> > As of v12, though, all three cases produce 0. I am not sure what\n> > to think about that with respect to an infinity input, but I'm\n> > quite sure I don't like it for NaN input.\n>\n> While I'm still not sure whether there's an academic argument that\n> zero is a reasonable stddev value for a single input that is Inf,\n> it seems to me that backwards compatibility is a sufficient reason\n> for going back to producing NaN for that.\n\nYeah, it was an oversight, not considering that case. I think the\nacademic argument could equally well be made that the result should be\nNaN if any input is Inf or NaN, even if there's only one input (it's\neffectively \"Inf - Inf\" or \"NaN - NaN\"), so I agree that backwards\ncompatibility clinches it.\n\n> Hence, attached are some proposed patches. 0001 just adds test\n> cases demonstrating the current behavior; then 0002 makes the\n> proposed code change. It's easy to check that the test case results\n> after 0002 match what v11 produces.\n\nThose both look reasonable to me.\n\n> 0003 deals with a different problem which I noted in [1]: the numeric\n> variants of var_samp and stddev_samp also do the wrong thing for a\n> single special input. Their disease is that they produce NaN for a\n> single NaN input, where it seems more sensible to produce NULL.\n> At least, NULL is what we get for the same case with the float\n> aggregates, so we have to change one or the other set of functions\n> if we want consistency.\n\nHmm, yeah it's a bit annoying that they're different. 
NULL seems like\nthe more logical result -- sample standard deviation isn't defined for\na sample of 1, so why should it be different if that one value is NaN.\n\nThe patch looks reasonable, except I wonder if all compilers are smart\nenough to realise that totCount is always initialised.\n\n> I propose back-patching 0001/0002 as far as v12, since the failure\n> to match the old outputs seems like a pretty clear bug/regression.\n> However, I'd be content to apply 0003 only to HEAD. That misbehavior\n> is very ancient, and the lack of complaints suggests that few people\n> really care about this fine point.\n\nMakes sense.\n\nRegards,\nDean\n\n\n",
"msg_date": "Sat, 13 Jun 2020 11:06:12 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Definitional issue: stddev_pop (and related) for 1 input"
},
{
"msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> The patch looks reasonable, except I wonder if all compilers are smart\n> enough to realise that totCount is always initialised.\n\nI think they should be, since that if-block ends with a return;\nthe only way to get to the use of totCount is for both parts of the\nfirst if-condition to be executed.\n\nIn any case, we do have an effective policy of ignoring\nuninitialized-variable warnings from very old/stupid compilers.\nlocust and prairiedog, which I think use the same ancient gcc\nversion, emit a couple dozen such warnings. If they are the only\nones that complain about this new code, I'll not worry.\n\nThanks for looking at the patch!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 13 Jun 2020 11:56:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Definitional issue: stddev_pop (and related) for 1 input"
}
] |
[
{
"msg_contents": "Hi,\n\nIt seems to me that we're making the same mistake with the replication\nparser that we've made in various places in the regular parser: using a\nsyntax for options that requires that every potential option be a\nkeyword, and every potential option requires modification of the\ngrammar. In particular, the syntax for the BASE_BACKUP command has\naccreted a whole lot of cruft already and I think that trend is likely\nto continue. I don't think that trying to keep people from adding\noptions is a good strategy, so instead I'd like to have a better way\nto do it. Attached is v1 of a patch to refactor things so that parts\nof the BASE_BACKUP and CREATE_REPLICATION_SLOT are replaced with a\nflexible options syntax. There are some debatable decisions here, so\nI'd be happy to get some feedback on whether to go further with this,\nor less far, or maybe even just abandon the idea altogether. I doubt\nthe last one is the right course, though: ISTM something's got to be\ndone about the BASE_BACKUP case, at least.\n\nThis version of the patch does not include documentation, but\nhopefully it's fairly clear from reading the code what it's all about.\nIf people agree with the basic approach, I'll write docs. The\nintention is that we'd continue to support the existing syntax for the\nexisting options, but the client tools would be adjusted to use the\nnew syntax if the server's new enough, and any new options would be\nsupported only through the new syntax. At some point in the distant\nfuture we could retire the old syntax, when we've stopped caring about\ncompatibility with pre-14 versions.\n\nThoughts?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 11 Jun 2020 16:40:54 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "extensible options syntax for replication parser?"
},
{
"msg_contents": "Hello Robert,\n\nMy 0.02 €:\n\n> It seems to me that we're making the same mistake with the replication\n> parser that we've made in various placesin the regular parser: using a\n> syntax for options that requires that every potential option be a\n> keyword, and every potential option requires modification of the\n> grammar. In particular, the syntax for the BASE_BACKUP command has\n> accreted a whole lot of cruft already and I think that trend is likely\n> to continue. I don't think that trying to keep people from adding\n> options is a good strategy,\n\nIndeed.\n\n> so instead I'd like to have a better way to do it.\n\n> Attached is v1 of a patch to refactor things so that parts of the \n> BASE_BACKUP and CREATE_REPLICATION_SLOT are replaced with a flexible \n> options syntax.\n\nPatch applies cleanly, however compilation fails on:\n\n repl_gram.y:271:106: error: expected ‘;’ before ‘}’\n\nGetting rid of \"ident_or_keyword\", some day, would be a relief.\n\nFor boolean options, you are generating (EXPORT_SNAPSHOT TRUE) where\n(EXPORT_SNAPSHOT) would do.\n\nMaybe allowing (!EXPORT_SNAPSHOT) for (FOO FALSE) would be nice, if it is \nnot allowed yet. That would also allow to get rid of FOO/NOFOO variants if \nany for FOO & !FOO, so one keyword is enough for a concept.\n\n> There are some debatable decisions here, so I'd be happy to get some \n> feedback on whether to go further with this, or less far, or maybe even \n> just abandon the idea altogether. I doubt the last one is the right \n> course, though: ISTM something's got to be done about the BASE_BACKUP \n> case, at least.\n\nISTM that it would be better to generalize the approach to all commands \nwhich accept options, so that the syntax is homogeneous.\n\n> If people agree with the basic approach, I'll write docs. 
The\n> intention is that we'd continue to support the existing syntax for the\n> existing options, but the client tools would be adjusted to use the\n> new syntax if the server's new enough, and any new options would be\n> supported only through the new syntax.\n\nYes.\n\n> At some point in the distant future we could retire the old syntax, when \n> we've stopped caring about compatibility with pre-14 versions.\n\nJust wondering: ISTM that the patch implies that dumping a v14 db \ngenerates the new syntax, which makes sense. Now I see 4 use cases wrt to \nversion.\n\n # source target comment\n 1 v < 14 v < 14 probably the dump would use one of the older version\n 2 v < 14 v >= 14 upgrade\n 3 v >= 14 v < 14 downgrade: oops, the output uses the new syntax\n 4 v >= 14 v >= 14 ok\n\nBoth cross version usages may be legitimate. In particular, 3 (oops, \nhardware issue, I have to move the db to a server where pg has not been \nupgraded) seems not possible because the generated syntax uses the new \napproach. Should/could there be some option to tell \"please generate vXXX \nsyntax\" to allow that?\n\n-- \nFabien.",
"msg_date": "Sun, 14 Jun 2020 09:15:37 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: extensible options syntax for replication parser?"
},
{
"msg_contents": "On Sun, Jun 14, 2020 at 3:15 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> > so instead I'd like to have a better way to do it.\n>\n> > Attached is v1 of a patch to refactor things so that parts of the\n> > BASE_BACKUP and CREATE_REPLICATION_SLOT are replaced with a flexible\n> > options syntax.\n>\n> Patch applies cleanly, however compilation fails on:\n>\n> repl_gram.y:271:106: error: expected ‘;’ before ‘}’\n\nOops. I'm surprised my compiler didn't complain.\n\n> Getting rid of \"ident_or_keyword\", some day, would be a relief.\n\nActually, I think that particular thing is a sign that things are\nbeing done correctly. You need something like that if you have\ncontexts where you want to treat keywords and non-keywords the same\nway, and it's generally good to have such places. In fact, this could\nprobably be profitably used in more places in the replication grammar.\n\n> For boolean options, you are generating (EXPORT_SNAPSHOT TRUE) where\n> (EXPORT_SNAPSHOT) would do.\n\nTrue, but it doesn't seem to matter much one way or the other. I\nthought this way looked a little clearer.\n\n> Maybe allowing (!EXPORT_SNAPSHOT) for (FOO FALSE) would be nice, if it is\n> not allowed yet. That would also allow to get rid of FOO/NOFOO variants if\n> any for FOO & !FOO, so one keyword is enough for a concept.\n\nWell, the goal for this is not to need ANY new keywords for a new\nconcept, but !FOO would be inconsistent with other places where we do\nsimilar things (e.g. EXPLAIN, VACUUM, COPY) so I don't think that's\nthe way to go.\n\n> > There are some debatable decisions here, so I'd be happy to get some\n> > feedback on whether to go further with this, or less far, or maybe even\n> > just abandon the idea altogether. 
I doubt the last one is the right\n> > course, though: ISTM something's got to be done about the BASE_BACKUP\n> > case, at least.\n>\n> ISTM that it would be better to generalize the approach to all commands\n> which accept options, so that the syntax is homogeneous.\n\nAs a general principle, sure, but it's always debatable how far to\ntake things in particular cases. For instance, in the cases of\nEXPLAIN, VACUUM, and COPY, the relation name is given as a dedicated\npiece of syntax, not an option. It could be given as an option, but\nsince it's mandatory and important, we didn't. In the case of COPY,\nthe source file is also specified via dedicated syntax, rather than an\noption. So we have to make the same kinds of decisions here. For\nexample, for CREATE_REPLICATION_SLOT, one could argue that PHYSICAL\nand LOGICAL should be moved to the extensible options list instead of\nbeing kept as separate syntax. However, that seems somewhat\ninconsistent with the long-standing syntax for START_REPLICATION,\nwhich already does use extensible options:\n\nSTART_REPLICATION SLOT slot_name LOGICAL XXX/XXX [ ( option_name\n[option_value] [, ... ] ) ]\n\nOn balance I am comfortable with what the patch does, but other people\nmight have a different take.\n\n> Just wondering: ISTM that the patch implies that dumping a v14 db\n> generates the new syntax, which makes sense. Now I see 4 use cases wrt to\n> version.\n>\n> # source target comment\n> 1 v < 14 v < 14 probably the dump would use one of the older version\n> 2 v < 14 v >= 14 upgrade\n> 3 v >= 14 v < 14 downgrade: oops, the output uses the new syntax\n> 4 v >= 14 v >= 14 ok\n>\n> Both cross version usages may be legitimate. In particular, 3 (oops,\n> hardware issue, I have to move the db to a server where pg has not been\n> upgraded) seems not possible because the generated syntax uses the new\n> approach. 
Should/could there be some option to tell \"please generate vXXX\n> syntax\" to allow that?\n\nI don't think dumping a DB is really affected by any of this. AFAIK,\nreplication commands aren't used in pg_dump output. It just affects\npg_basebackup and the server, and you'll notice that I have taken\npains to allow the server to continue to accept the current format,\nand to allow pg_basebackup to generate that format when talking to an\nolder server.\n\nThanks for the review. v2 attached, hopefully fixing the compilation\nissue you mentioned.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 24 Jun 2020 11:51:24 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: extensible options syntax for replication parser?"
},
{
"msg_contents": "On Wed, Jun 24, 2020 at 11:51 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Thanks for the review. v2 attached, hopefully fixing the compilation\n> issue you mentioned.\n\nTushar Ahuja reported to me off-list that my basebackup refactoring\npatch set was changing whether or not the following message appeared:\n\nNOTICE: WAL archiving is not enabled; you must ensure that all\nrequired WAL segments are copied through other means to complete the\nbackup\n\nThat patch set includes this patch, and the reason for the behavior\ndifference turned out to be that I had gotten an if-test that is part\nof this patch backwards. Here is v3, fixing that. It is a little\ndisappointing that this mistake didn't cause any existing regression\ntests to fail.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 8 Oct 2020 10:33:06 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: extensible options syntax for replication parser?"
},
{
"msg_contents": "On Thu, Oct 8, 2020 at 10:33 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> That patch set includes this patch, and the reason for the behavior\n> difference turned out to be that I had gotten an if-test that is part\n> of this patch backwards. Here is v3, fixing that. It is a little\n> disappointing that this mistake didn't cause any existing regression\n> tests to fail.\n\nI'm returning to this topic ~11 months later with a more definite\nintent to get something committed, since my \"refactoring basebackup.c\"\npatch set - that also adds server-side compression and server-side\nbackup - needs to add more options to BASE_BACKUP, and doubling down\non the present options-parsing strategy seems like a horrible idea.\nI've now split this into two patches, one for BASE_BACKUP, and the\nother for CREATE_REPLICATION_SLOT. I've rebased the patches and added\ndocumentation as well. The CREATE_REPLICATION_SLOT patch now unifies\nEXPORT_SNAPSHOT, NOEXPORT_SNAPSHOT, and USE_SNAPSHOT, which are\nmutually exclusive choices, into SNAPSHOT { 'export' | 'use' |\n'nothing' } which IMHO is clearer.\n\nLast call for complaints about either the overall direction or the\nspecific implementation choices...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 10 Sep 2021 15:44:22 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: extensible options syntax for replication parser?"
},
{
"msg_contents": "On Fri, Sep 10, 2021 at 3:44 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Last call for complaints about either the overall direction or the\n> specific implementation choices...\n\nA complaint showed up over at\nhttp://postgr.es/m/979131631633278@mail.yandex.ru and pursuant to that\ncomplaint I have made the new syntax for controlling the checkpoint\ntype look like CHECKPOINT { 'fast' | 'spread' } rather than just\nhaving an option called FAST. It was suggested over there to also\nrename WAIT to WAIT_WAL_ARCHIVED, but I don't like that for reasons\nexplained on that thread and so have not adopted that proposal.\n\nSergei also helpfully pointed out that I'd accidentally deleted a\nversion check in one place, so this version is also updated to not do\nthat.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 21 Sep 2021 12:27:21 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: extensible options syntax for replication parser?"
},
{
"msg_contents": "Hello\n\nThanks, I missed this thread.\n\n> + <term><literal>CHECKPOINT { 'fast' | 'spread' }</replaceable></literal></term>\n\nUnpaired </replaceable> tag in docs.\n\nThat was all I noticed in 0001. Still not sure where is the difference between \"change NOWAIT to WAIT\" and \"change NOWAIT to something else descriptive\". But fine, I can live with WAIT. (one note: the exact command is visible to the user when log_replication_commands is enabled, not completely hidden)\n\n0002 breaks \"create subscription (with create_slot true)\" when the publish server is an earlier version:\n\npostgres=# create subscription test CONNECTION 'host=127.0.0.1 user=postgres' PUBLICATION test with (create_slot = true);\nERROR: could not create replication slot \"test\": ERROR: syntax error\n\nregards, Sergei\n\n\n",
"msg_date": "Wed, 22 Sep 2021 15:11:10 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: extensible options syntax for replication parser?"
},
{
"msg_contents": "On Wed, Sep 22, 2021 at 8:11 AM Sergei Kornilov <sk@zsrv.org> wrote:\n> > + <term><literal>CHECKPOINT { 'fast' | 'spread' }</replaceable></literal></term>\n>\n> Unpaired </replaceable> tag in docs.\n>\n> That was all I noticed in 0001. Still not sure where is the difference between \"change NOWAIT to WAIT\" and \"change NOWAIT to something else descriptive\". But fine, I can live with WAIT. (one note: the exact command is visible to the user when log_replication_commands is enabled, not completely hidden)\n>\n> 0002 breaks \"create subscription (with create_slot true)\" when the publish server is an earlier version:\n>\n> postgres=# create subscription test CONNECTION 'host=127.0.0.1 user=postgres' PUBLICATION test with (create_slot = true);\n> ERROR: could not create replication slot \"test\": ERROR: syntax error\n\nThanks. I have attempted to fix these problems in the attached version.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 22 Sep 2021 15:55:47 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: extensible options syntax for replication parser?"
},
{
"msg_contents": "On 9/23/21 1:25 AM, Robert Haas wrote:\n>> postgres=# create subscription test CONNECTION 'host=127.0.0.1 user=postgres' PUBLICATION test with (create_slot = true);\n>> ERROR: could not create replication slot \"test\": ERROR: syntax error\n> Thanks. I have attempted to fix these problems in the attached version.\n\nI checked and it looks like the issue is still not fixed against v7-* patches -\n\npostgres=# create subscription test CONNECTION 'host=127.0.0.1 user=edb \ndbname=postgres' PUBLICATION p with (create_slot = true);\nERROR: could not create replication slot \"test\": ERROR: syntax error\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 23 Sep 2021 12:24:57 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: extensible options syntax for replication parser?"
},
{
"msg_contents": "On Thu, Sep 23, 2021 at 2:55 AM tushar <tushar.ahuja@enterprisedb.com> wrote:\n> l checked and look like the issue is still not fixed against v7-* patches -\n>\n> postgres=# create subscription test CONNECTION 'host=127.0.0.1 user=edb dbname=postgres' PUBLICATION p with (create_slot = true);\n> ERROR: could not create replication slot \"test\": ERROR: syntax error\n\nThanks. Looks like that version had some stupid mistakes. Here's a new one.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 23 Sep 2021 11:05:40 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: extensible options syntax for replication parser?"
},
{
"msg_contents": "\n\nOn 2021/09/24 0:05, Robert Haas wrote:\n> On Thu, Sep 23, 2021 at 2:55 AM tushar <tushar.ahuja@enterprisedb.com> wrote:\n>> l checked and look like the issue is still not fixed against v7-* patches -\n>>\n>> postgres=# create subscription test CONNECTION 'host=127.0.0.1 user=edb dbname=postgres' PUBLICATION p with (create_slot = true);\n>> ERROR: could not create replication slot \"test\": ERROR: syntax error\n> \n> Thanks. Looks like that version had some stupid mistakes. Here's a new one.\n\n- <indexterm><primary>BASE_BACKUP</primary></indexterm>\n+ <term><literal>BASE_BACKUP</literal> [ ( <replaceable class=\"parameter\">option</replaceable> [, ...] ) ]\n\nYou seem to accidentally remove the index term for BASE_BACKUP.\n\n+ident_or_keyword:\n+\t\t\tIDENT\t\t\t\t\t\t\t{ $$ = $1; }\n\nident_or_keyword seems to be used only for generic options,\nbut it includes the keywords for legacy options like \"FAST\".\nIsn't it better to remove the keywords for legacy options from\nident_or_keyword?\n\nOTOH, the keywords for newly-introduced generic options like\nCHECKPOINT should be defined in repl_scanner.l and repl_gram.y?\n\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 24 Sep 2021 13:01:18 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: extensible options syntax for replication parser?"
},
{
"msg_contents": "On 9/23/21 8:35 PM, Robert Haas wrote:\n> Thanks. Looks like that version had some stupid mistakes. Here's a new one.\n\nThanks, the reported issue seems to be fixed now for HEAD w/patch \n(publication) to HEAD w/patch (subscription) but still getting the same \nerror if we try to perform v12(publication) to HEAD \nw/patch(subscription) . I checked there is no such issue for \nv12(publication) to v14 RC1 (subscription)\n\npostgres=# create subscription sub123s CONNECTION 'host=127.0.0.1 \nuser=edb port=4444 dbname=postgres' PUBLICATION pp with (slot_name = \nfrom_v14);\nERROR: could not create replication slot \"from_v14\": ERROR: syntax error\npostgres=#\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n",
"msg_date": "Fri, 24 Sep 2021 16:58:06 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: extensible options syntax for replication parser?"
},
{
"msg_contents": "On Fri, Sep 24, 2021 at 12:01 AM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> You seem to accidentally remove the index term for BASE_BACKUP.\n\nGood catch.\n\n> +ident_or_keyword:\n> + IDENT { $$ = $1; }\n>\n> ident_or_keyword seems to be used only for generic options,\n> but it includes the keywords for legacy options like \"FAST\".\n> Isn't it better to remove the keywords for legacy options from\n> ident_or_keyword?\n\nI don't think so. I mean, if we do, then it's not really an\nident_or_keyword production any more, because it would only allow some\nkeywords, not all. Now if the keywords that are not included aren't\nused as options anywhere then it won't matter, but it seems cleaner to\nme to make the list complete.\n\n> OTOH, the keywords for newly-introduced generic options like\n> CHECKPOINT should be defined in repl_scanner.l and repl_gram.y?\n\nOne big advantage of this approach is that we don't need to make\nchanges to those files in order to add new options, so I feel like\nthat would be missing the point completely.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 24 Sep 2021 12:55:00 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: extensible options syntax for replication parser?"
},
{
"msg_contents": "On Fri, Sep 24, 2021 at 7:28 AM tushar <tushar.ahuja@enterprisedb.com> wrote:\n> On 9/23/21 8:35 PM, Robert Haas wrote:\n> > Thanks. Looks like that version had some stupid mistakes. Here's a new one.\n>\n> Thanks, the reported issue seems to be fixed now for HEAD w/patch\n> (publication) to HEAD w/patch (subscription) but still getting the same\n> error if we try to perform v12(publication) to HEAD\n> w/patch(subscription) . I checked there is no such issue for\n> v12(publication) to v14 RC1 (subscription)\n>\n> postgres=# create subscription sub123s CONNECTION 'host=127.0.0.1\n> user=edb port=4444 dbname=postgres' PUBLICATION pp with (slot_name =\n> from_v14);\n> ERROR: could not create replication slot \"from_v14\": ERROR: syntax error\n> postgres=#\n\nI am not able to reproduce this failure. I suspect you made a mistake\nin testing, because my test case before sending the patch was\nbasically the same as yours, except that I was testing with v13. But I\ntried again with v12 and it seems fine:\n\n[rhaas pgsql]$ createdb -p 5412\n[rhaas pgsql]$ psql -c 'select version()' -p 5412\n version\n----------------------------------------------------------------------------------------------------------------\n PostgreSQL 12.3 on x86_64-apple-darwin19.4.0, compiled by clang\nversion 5.0.2 (tags/RELEASE_502/final), 64-bit\n(1 row)\n[rhaas pgsql]$ psql\npsql (15devel)\nType \"help\" for help.\n\nrhaas=# create subscription sub123s CONNECTION 'port=5412' PUBLICATION\npp with (slot_name =\nfrom_v14);\nNOTICE: created replication slot \"from_v14\" on publisher\nCREATE SUBSCRIPTION\n\nHere's v9, fixing the issue reported by Fujii Masao.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 24 Sep 2021 13:06:30 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: extensible options syntax for replication parser?"
},
{
"msg_contents": "On 9/24/21 10:36 PM, Robert Haas wrote:\n> I am not able to reproduce this failure. I suspect you made a mistake\n> in testing, because my test case before sending the patch was\n> basically the same as yours, except that I was testing with v13. But I\n> tried again with v12 and it seems fine:\n\nRight, on a clean setup - I am also not able to reproduce this issue. \nThanks for checking at your end.\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n",
"msg_date": "Fri, 24 Sep 2021 23:24:27 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: extensible options syntax for replication parser?"
},
{
"msg_contents": "On 9/24/21 10:36 PM, Robert Haas wrote:\n> Here's v9, fixing the issue reported by Fujii Masao.\n\nPlease refer this scenario where publication on v14RC1 and subscription \non HEAD (w/patch)\n\n--create a subscription with parameter two_phase=1 on HEAD\n\npostgres=# CREATE SUBSCRIPTION r1015 CONNECTION 'dbname=postgres \nhost=localhost port=5454' PUBLICATION p WITH (two_phase=1);\nNOTICE: created replication slot \"r1015\" on publisher\nCREATE SUBSCRIPTION\npostgres=#\n\n--check on 14RC1\n\npostgres=# select two_phase from pg_replication_slots where \nslot_name='r105';\n two_phase\n-----------\n f\n(1 row)\n\nso are we silently ignoring this parameter as it is not supported on \nv14RC/HEAD ? and if yes then why not we just throw an message like\nERROR: unrecognized subscription parameter: \"two_phase\"\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n",
"msg_date": "Fri, 24 Sep 2021 23:57:59 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: extensible options syntax for replication parser?"
},
{
"msg_contents": "On 9/24/21 11:57 PM, tushar wrote:\n> postgres=# select two_phase from pg_replication_slots where \n> slot_name='r105'; \n\nCorrection -Please read 'r105' to 'r1015'\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n",
"msg_date": "Sat, 25 Sep 2021 00:08:19 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: extensible options syntax for replication parser?"
},
{
"msg_contents": "On Fri, Sep 24, 2021 at 2:28 PM tushar <tushar.ahuja@enterprisedb.com> wrote:\n> Please refer this scenario where publication on v14RC1 and subscription\n> on HEAD (w/patch)\n>\n> --create a subscription with parameter two_phase=1 on HEAD\n>\n> postgres=# CREATE SUBSCRIPTION r1015 CONNECTION 'dbname=postgres\n> host=localhost port=5454' PUBLICATION p WITH (two_phase=1);\n> NOTICE: created replication slot \"r1015\" on publisher\n> CREATE SUBSCRIPTION\n> postgres=#\n>\n> --check on 14RC1\n>\n> postgres=# select two_phase from pg_replication_slots where\n> slot_name='r105';\n> two_phase\n> -----------\n> f\n> (1 row)\n>\n> so are we silently ignoring this parameter as it is not supported on\n> v14RC/HEAD ? and if yes then why not we just throw an message like\n> ERROR: unrecognized subscription parameter: \"two_phase\"\n\ntwo_phase is new in v15, something you could also find out by checking\nthe documentation. Now if the patch changes the way two_phase\ninteracts with older versions, that's a bug in the patch and we should\nfix it. But if the same issue exists without the patch then I'm not\nsure why you are raising it here rather than on the thread where that\nfeature was developed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 24 Sep 2021 14:38:53 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: extensible options syntax for replication parser?"
},
{
"msg_contents": "On Sat, Sep 25, 2021 at 4:28 AM tushar <tushar.ahuja@enterprisedb.com> wrote:\n>\n> On 9/24/21 10:36 PM, Robert Haas wrote:\n> > Here's v9, fixing the issue reported by Fujii Masao.\n>\n> Please refer this scenario where publication on v14RC1 and subscription\n> on HEAD (w/patch)\n>\n> --create a subscription with parameter two_phase=1 on HEAD\n>\n> postgres=# CREATE SUBSCRIPTION r1015 CONNECTION 'dbname=postgres\n> host=localhost port=5454' PUBLICATION p WITH (two_phase=1);\n> NOTICE: created replication slot \"r1015\" on publisher\n> CREATE SUBSCRIPTION\n> postgres=#\n>\n> --check on 14RC1\n>\n> postgres=# select two_phase from pg_replication_slots where\n> slot_name='r105';\n> two_phase\n> -----------\n> f\n> (1 row)\n>\n> so are we silently ignoring this parameter as it is not supported on\n> v14RC/HEAD ? and if yes then why not we just throw an message like\n> ERROR: unrecognized subscription parameter: \"two_phase\"\n>\n> --\n\nThere is usually a time lag between a subscription created with two_phase on and\nthe slot on the publisher enabling two_phase. It only happens after a\ntablesync is completed and\nthe apply worker is restarted. There are logs which indicate that this\nhas happened. If you could share the\nlogs (on publisher and subscriber) when this happens, I could have a look.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 27 Sep 2021 11:20:12 +1000",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: extensible options syntax for replication parser?"
},
{
"msg_contents": "On Mon, Sep 27, 2021 at 11:20 AM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> On Sat, Sep 25, 2021 at 4:28 AM tushar <tushar.ahuja@enterprisedb.com> wrote:\n> >\n> > On 9/24/21 10:36 PM, Robert Haas wrote:\n> > > Here's v9, fixing the issue reported by Fujii Masao.\n> >\n> > Please refer this scenario where publication on v14RC1 and subscription\n> > on HEAD (w/patch)\n> >\n> > --create a subscription with parameter two_phase=1 on HEAD\n> >\n> > postgres=# CREATE SUBSCRIPTION r1015 CONNECTION 'dbname=postgres\n> > host=localhost port=5454' PUBLICATION p WITH (two_phase=1);\n> > NOTICE: created replication slot \"r1015\" on publisher\n> > CREATE SUBSCRIPTION\n> > postgres=#\n> >\n> > --check on 14RC1\n> >\n> > postgres=# select two_phase from pg_replication_slots where\n> > slot_name='r105';\n> > two_phase\n> > -----------\n> > f\n> > (1 row)\n> >\n> > so are we silently ignoring this parameter as it is not supported on\n> > v14RC/HEAD ? and if yes then why not we just throw an message like\n> > ERROR: unrecognized subscription parameter: \"two_phase\"\n> >\n> > --\n>\n> There is usually a time lag between a subscription created with two_phase on and\n> the slot on the publisher enabling two_phase. It only happens after a\n> tablesync is completed and\n> the apply worker is restarted. There are logs which indicate that this\n> has happened. If you could share the\n> logs (on publisher and subscriber) when this happens, I could have a look.\n>\n\nAnd in case you do see a problem, I request you create a seperate\nthread for this. I didn't want to derail this patch.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 27 Sep 2021 13:59:46 +1000",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: extensible options syntax for replication parser?"
},
{
"msg_contents": "On 9/25/21 12:08 AM, Robert Haas wrote:\n> two_phase is new in v15, something you could also find out by checking\n> the documentation. Now if the patch changes the way two_phase\n> interacts with older versions, that's a bug in the patch and we should\n> fix it. But if the same issue exists without the patch then I'm not\n> sure why you are raising it here rather than on the thread where that\n> feature was developed.\n\nRight, issue is reproducible on HEAD as well. I should have checked \nthat, sorry about it.\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n",
"msg_date": "Mon, 27 Sep 2021 12:41:14 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: extensible options syntax for replication parser?"
},
{
"msg_contents": "On 9/27/21 9:29 AM, Ajin Cherian wrote:\n> And in case you do see a problem, I request you create a seperate\n> thread for this. I didn't want to derail this patch.\n\nIt would be great if we throw an error rather than silently ignoring \nthis parameter , I opened a separate email for this\n\nhttps://www.postgresql.org/message-id/CAC6VRoY3SAFeO7kZ0EOVC6mX%3D1ZyTocaecTDTh209W20KCC_aQ%40mail.gmail.com\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n",
"msg_date": "Mon, 27 Sep 2021 12:45:12 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: extensible options syntax for replication parser?"
},
{
"msg_contents": "On Mon, Sep 27, 2021 at 3:15 AM tushar <tushar.ahuja@enterprisedb.com> wrote:\n> It would be great if we throw an error rather than silently ignoring\n> this parameter , I opened a separate email for this\n>\n> https://www.postgresql.org/message-id/CAC6VRoY3SAFeO7kZ0EOVC6mX%3D1ZyTocaecTDTh209W20KCC_aQ%40mail.gmail.com\n\nHearing no further comments, I've gone ahead and committed these\npatches. I'm still slightly nervous that I may have missed some issue,\nbut I think at this point having the patches in the tree is more\nlikely to turn it up than any other course of action.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Oct 2021 12:56:12 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: extensible options syntax for replication parser?"
},
{
"msg_contents": "On 10/5/21 10:26 PM, Robert Haas wrote:\n> Hearing no further comments, I've gone ahead and committed these\n> patches. I'm still slightly nervous that I may have missed some issue,\n> but I think at this point having the patches in the tree is more\n> likely to turn it up than any other course of action.\nI have tested a couple of scenarios of pg_basebackup / pg_receivewal \n/pg_recvlogical / Publication(wal_level=logical) and\nSubscription etc. against HEAD (with patches) and cross-version \ntesting. Things look good to me and no breakage was found.\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n",
"msg_date": "Wed, 13 Oct 2021 19:35:47 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: extensible options syntax for replication parser?"
}
] |
[
{
"msg_contents": "Hi,\n\nI've heard from a few people that building PostgreSQL extensions on\nWindows is a bit of a pain. I've heard from these people that their\nsolution was to temporarily add their extension as a contrib module\nand have the extension building code take care of creating and\nbuilding the Visual Studio project file.\n\nI also have to say, I do often use Visual Studio myself for PostgreSQL\ndevelopment, but when it comes to testing something with an extension,\nI've always avoided the problem and moved over to Linux.\n\nI thought about how we might improve this situation. The easiest way\nI could think to do this was to just reuse the code that builds the\nVisual Studio project files for contrib modules and write a Perl\nscript which calls those functions. Now, these functions, for those\nwho have never looked there before, they do use the PGXS compatible\nMakefile as a source of truth and build the VS project file from that.\nI've attached a very rough PoC patch which attempts to do this.\n\nThe script just takes the directory of the Makefile as the first\nargument, and optionally the path to pg_config.exe as the 2nd\nargument. If that happens to be in the PATH environment variable then\nthat can be left out.\n\nYou end up with:\n\nX:\\pg_src\\src\\tools\\msvc>perl extbuild.pl\nX:\\pg_src\\contrib\\auto_explain X:\\pg\\bin\nMakefile dir = X:\\pg_src\\contrib\\auto_explain\nPostgres include dir = X:\\pg\\include\nBuilding = Release\nDetected hardware platform: x64\n\n...\n\nBuild succeeded.\n 0 Warning(s)\n 0 Error(s)\n\nTime Elapsed 00:00:01.13\n\nFor now, I've only tested this on a few contrib modules. It does need\nmore work to properly build ones with a list of \"MODULES\" in the\nMakefile. It seems to work ok on the MODULE_big ones that I tested. It\nneeds a bit more work to get the resource file paths working properly\nfor PROGRAM.\n\nBefore I go and invest more time in this, I'd like to get community\nfeedback about the idea. 
Is this something that we'd want? Does it\nseem maintainable enough to have in core? Is there a better way to do\nit?\n\nDavid",
"msg_date": "Fri, 12 Jun 2020 10:42:45 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Building PostgreSQL extensions on Windows"
},
{
"msg_contents": "On 6/11/20 6:42 PM, David Rowley wrote:\n> I've heard from a few people that building PostgreSQL extensions on\n> Windows is a bit of a pain. I've heard from these people that their\n> solution was to temporarily add their extension as a contrib module\n> and have the extension building code take care of creating and\n> building the Visual Studio project file.\n\n\nYep -- that is exactly how I have been building PL/R on Windows for many years,\nand it is painful.\n\n\n> I thought about how we might improve this situation. The easiest way\n> I could think to do this was to just reuse the code that builds the\n> Visual Studio project files for contrib modules and write a Perl\n> script which calls those functions. Now, these functions, for those\n> who have never looked there before, they do use the PGXS compatible\n> Makefile as a source of truth and build the VS project file from that.\n> I've attached a very rough PoC patch which attempts to do this.\n> \n> The script just takes the directory of the Makefile as the first\n> argument, and optionally the path to pg_config.exe as the 2nd\n> argument. If that happens to be in the PATH environment variable then\n> that can be left out.\n> \n> You end up with:\n> \n> X:\\pg_src\\src\\tools\\msvc>perl extbuild.pl\n> X:\\pg_src\\contrib\\auto_explain X:\\pg\\bin\n> Makefile dir = X:\\pg_src\\contrib\\auto_explain\n> Postgres include dir = X:\\pg\\include\n> Building = Release\n> Detected hardware platform: x64\n> \n> ...\n> \n> Build succeeded.\n> 0 Warning(s)\n> 0 Error(s)\n> \n> Time Elapsed 00:00:01.13\n> \n> For now, I've only tested this on a few contrib modules. It does need\n> more work to properly build ones with a list of \"MODULES\" in the\n> Makefile. It seems to work ok on the MODULE_big ones that I tested. 
It\n> needs a bit more work to get the resource file paths working properly\n> for PROGRAM.\n> \n> Before I go and invest more time in this, I'd like to get community\n> feedback about the idea. Is this something that we'd want? Does it\n> seem maintainable enough to have in core? Is there a better way to do\n> it?\n\n\nSounds very useful to me -- I'll give it a try with PL/R this weekend.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Fri, 12 Jun 2020 05:59:21 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Building PostgreSQL extensions on Windows"
},
{
"msg_contents": "Hi David,\n\nNo pain with CMake. It's pretty easy to use it in Windows for PostgreSQL\nextensions. Example, https://github.com/dmitigr/pgnso\n\n\nOn Fri, 12 Jun 2020, 01:43 David Rowley, <dgrowleyml@gmail.com> wrote:\n\n> Hi,\n>\n> I've heard from a few people that building PostgreSQL extensions on\n> Windows is a bit of a pain. I've heard from these people that their\n> solution was to temporarily add their extension as a contrib module\n> and have the extension building code take care of creating and\n> building the Visual Studio project file.\n>\n> I also have to say, I do often use Visual Studio myself for PostgreSQL\n> development, but when it comes to testing something with an extension,\n> I've always avoided the problem and moved over to Linux.\n>\n> I thought about how we might improve this situation. The easiest way\n> I could think to do this was to just reuse the code that builds the\n> Visual Studio project files for contrib modules and write a Perl\n> script which calls those functions. Now, these functions, for those\n> who have never looked there before, they do use the PGXS compatible\n> Makefile as a source of truth and build the VS project file from that.\n> I've attached a very rough PoC patch which attempts to do this.\n>\n> The script just takes the directory of the Makefile as the first\n> argument, and optionally the path to pg_config.exe as the 2nd\n> argument. If that happens to be in the PATH environment variable then\n> that can be left out.\n>\n> You end up with:\n>\n> X:\\pg_src\\src\\tools\\msvc>perl extbuild.pl\n> X:\\pg_src\\contrib\\auto_explain X:\\pg\\bin\n> Makefile dir = X:\\pg_src\\contrib\\auto_explain\n> Postgres include dir = X:\\pg\\include\n> Building = Release\n> Detected hardware platform: x64\n>\n> ...\n>\n> Build succeeded.\n> 0 Warning(s)\n> 0 Error(s)\n>\n> Time Elapsed 00:00:01.13\n>\n> For now, I've only tested this on a few contrib modules. 
It does need\n> more work to properly build ones with a list of \"MODULES\" in the\n> Makefile. It seems to work ok on the MODULE_big ones that I tested. It\n> needs a bit more work to get the resource file paths working properly\n> for PROGRAM.\n>\n> Before I go and invest more time in this, I'd like to get community\n> feedback about the idea. Is this something that we'd want? Does it\n> seem maintainable enough to have in core? Is there a better way to do\n> it?\n>\n> David\n>\n>\n\nHi David,No pain with CMake. It's pretty easy to use it in Windows for PostgreSQL extensions. Example, https://github.com/dmitigr/pgnsoOn Fri, 12 Jun 2020, 01:43 David Rowley, <dgrowleyml@gmail.com> wrote:Hi,\n\nI've heard from a few people that building PostgreSQL extensions on\nWindows is a bit of a pain. I've heard from these people that their\nsolution was to temporarily add their extension as a contrib module\nand have the extension building code take care of creating and\nbuilding the Visual Studio project file.\n\nI also have to say, I do often use Visual Studio myself for PostgreSQL\ndevelopment, but when it comes to testing something with an extension,\nI've always avoided the problem and moved over to Linux.\n\nI thought about how we might improve this situation. The easiest way\nI could think to do this was to just reuse the code that builds the\nVisual Studio project files for contrib modules and write a Perl\nscript which calls those functions. Now, these functions, for those\nwho have never looked there before, they do use the PGXS compatible\nMakefile as a source of truth and build the VS project file from that.\nI've attached a very rough PoC patch which attempts to do this.\n\nThe script just takes the directory of the Makefile as the first\nargument, and optionally the path to pg_config.exe as the 2nd\nargument. 
If that happens to be in the PATH environment variable then\nthat can be left out.\n\nYou end up with:\n\nX:\\pg_src\\src\\tools\\msvc>perl extbuild.pl\nX:\\pg_src\\contrib\\auto_explain X:\\pg\\bin\nMakefile dir = X:\\pg_src\\contrib\\auto_explain\nPostgres include dir = X:\\pg\\include\nBuilding = Release\nDetected hardware platform: x64\n\n...\n\nBuild succeeded.\n 0 Warning(s)\n 0 Error(s)\n\nTime Elapsed 00:00:01.13\n\nFor now, I've only tested this on a few contrib modules. It does need\nmore work to properly build ones with a list of \"MODULES\" in the\nMakefile. It seems to work ok on the MODULE_big ones that I tested. It\nneeds a bit more work to get the resource file paths working properly\nfor PROGRAM.\n\nBefore I go and invest more time in this, I'd like to get community\nfeedback about the idea. Is this something that we'd want? Does it\nseem maintainable enough to have in core? Is there a better way to do\nit?\n\nDavid",
"msg_date": "Fri, 12 Jun 2020 21:48:18 +0300",
"msg_from": "Dmitry Igrishin <dmitigr@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Building PostgreSQL extensions on Windows"
}
] |
[
{
"msg_contents": "The POSIX standard says this about the exp(3) function:\n\n\tIf x is -Inf, +0 shall be returned.\n\nAt least on my Linux box, our version does no such thing:\n\nregression=# select exp('-inf'::float8);\nERROR: value out of range: underflow\n\nDoes anyone disagree that that's a bug? Should we back-patch\na fix, or just change it in HEAD? Given the lack of user\ncomplaints, I lean a bit towards the latter, but am not sure.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 11 Jun 2020 19:22:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "exp() versus the POSIX standard"
},
{
"msg_contents": "I wrote:\n> The POSIX standard says this about the exp(3) function:\n> \tIf x is -Inf, +0 shall be returned.\n> At least on my Linux box, our version does no such thing:\n> regression=# select exp('-inf'::float8);\n> ERROR: value out of range: underflow\n\nNow that I look, power() has similar issues:\n\nregression=# select power('1.1'::float8, '-inf');\nERROR: value out of range: underflow\nregression=# select power('0.1'::float8, 'inf');\nERROR: value out of range: underflow\nregression=# select power('-inf'::float8, '-3');\nERROR: value out of range: underflow\nregression=# select power('-inf'::float8, '-4');\nERROR: value out of range: underflow\n\ncontradicting POSIX which says\n\nFor |x| > 1, if y is -Inf, +0 shall be returned.\n\nFor |x| < 1, if y is +Inf, +0 shall be returned.\n\nFor y an odd integer < 0, if x is -Inf, -0 shall be returned.\n\nFor y < 0 and not an odd integer, if x is -Inf, +0 shall be returned.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 11 Jun 2020 19:56:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: exp() versus the POSIX standard"
},
{
"msg_contents": "пт, 12 чэр 2020, 02:57 карыстальнік Tom Lane <tgl@sss.pgh.pa.us> напісаў:\n\n> I wrote:\n> > The POSIX standard says this about the exp(3) function:\n> > If x is -Inf, +0 shall be returned.\n> > At least on my Linux box, our version does no such thing:\n> > regression=# select exp('-inf'::float8);\n> > ERROR: value out of range: underflow\n>\n> Now that I look, power() has similar issues:\n>\n> regression=# select power('1.1'::float8, '-inf');\n> ERROR: value out of range: underflow\n> regression=# select power('0.1'::float8, 'inf');\n> ERROR: value out of range: underflow\n> regression=# select power('-inf'::float8, '-3');\n> ERROR: value out of range: underflow\n> regression=# select power('-inf'::float8, '-4');\n> ERROR: value out of range: underflow\n>\n> contradicting POSIX which says\n>\n> For |x| > 1, if y is -Inf, +0 shall be returned.\n>\n> For |x| < 1, if y is +Inf, +0 shall be returned.\n>\n> For y an odd integer < 0, if x is -Inf, -0 shall be returned.\n>\n> For y < 0 and not an odd integer, if x is -Inf, +0 shall be returned.\n>\n\n\nI've had the same issue with multiplying two tiny numbers. Select\n2e-300::float * 2e-300::float gives an underflow, and it is not a wanted\nthing. This looks like handmade implementation of IEEE754's underflow\nexception that should be an optional return flag in addition to well\ndefined number, but became a stop-the-world exception instead. Had to build\ncustom Postgres with that logic ripped off in the past to be able to\nmultiply numbers. 
Will be happy if that \"underflow\" (and overflow) thing is\nremoved.\n\nIf in doubt whether this exception should be removed, to follow the spec\nfully in this way you have to also raise exception on any inexact result of\noperations on floats.\n\n\n\n\n\n\n\n> regards, tom lane\n>\n>\n>\n\nпт, 12 чэр 2020, 02:57 карыстальнік Tom Lane <tgl@sss.pgh.pa.us> напісаў:I wrote:\n> The POSIX standard says this about the exp(3) function:\n> If x is -Inf, +0 shall be returned.\n> At least on my Linux box, our version does no such thing:\n> regression=# select exp('-inf'::float8);\n> ERROR: value out of range: underflow\n\nNow that I look, power() has similar issues:\n\nregression=# select power('1.1'::float8, '-inf');\nERROR: value out of range: underflow\nregression=# select power('0.1'::float8, 'inf');\nERROR: value out of range: underflow\nregression=# select power('-inf'::float8, '-3');\nERROR: value out of range: underflow\nregression=# select power('-inf'::float8, '-4');\nERROR: value out of range: underflow\n\ncontradicting POSIX which says\n\nFor |x| > 1, if y is -Inf, +0 shall be returned.\n\nFor |x| < 1, if y is +Inf, +0 shall be returned.\n\nFor y an odd integer < 0, if x is -Inf, -0 shall be returned.\n\nFor y < 0 and not an odd integer, if x is -Inf, +0 shall be returned.I've had the same issue with multiplying two tiny numbers. Select 2e-300::float * 2e-300::float gives an underflow, and it is not a wanted thing. This looks like handmade implementation of IEEE754's underflow exception that should be an optional return flag in addition to well defined number, but became a stop-the-world exception instead. Had to build custom Postgres with that logic ripped off in the past to be able to multiply numbers. Will be happy if that \"underflow\" (and overflow) thing is removed.If in doubt whether this exception should be removed, to follow the spec fully in this way you have to also raise exception on any inexact result of operations on floats. \n\n regards, tom lane",
"msg_date": "Fri, 12 Jun 2020 03:36:26 +0300",
"msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>",
"msg_from_op": false,
"msg_subject": "Re: exp() versus the POSIX standard"
},
{
"msg_contents": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net> writes:\n> I've had the same issue with multiplying two tiny numbers. Select\n> 2e-300::float * 2e-300::float gives an underflow, and it is not a wanted\n> thing. This looks like handmade implementation of IEEE754's underflow\n> exception that should be an optional return flag in addition to well\n> defined number, but became a stop-the-world exception instead.\n\nSolving that problem is very far outside the scope of what I'm interested\nin here. I think that we'd probably regret it if we try to support IEEE\nsubnormals, for example --- I know that all modern hardware is probably\ngood with those, but I'd bet against different platforms' libc functions\nall behaving the same. I don't see a sane way to offer user control over\nwhether we throw underflow errors or not, either. (Do you really want \"+\"\nto stop being immutable?) The darker corners of IEEE754, like inexactness\nexceptions, are even less likely to be implemented consistently\neverywhere.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 11 Jun 2020 21:25:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: exp() versus the POSIX standard"
},
{
"msg_contents": "Hi,\n\nOn Fri, Jun 12, 2020 at 4:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> =?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>\n> writes:\n> > I've had the same issue with multiplying two tiny numbers. Select\n> > 2e-300::float * 2e-300::float gives an underflow, and it is not a wanted\n> > thing. This looks like handmade implementation of IEEE754's underflow\n> > exception that should be an optional return flag in addition to well\n> > defined number, but became a stop-the-world exception instead.\n>\n> Solving that problem is very far outside the scope of what I'm interested\n> in here.\n\n\nThey're essentially the same issue.\n\nGenerally, it exists from the very beginning of git and seems to be a\nseries of misunderstandings.\n\nInitially, somewhere around 1996, someone thought that a double goes only\nfrom DBL_MIN to DBL_MAX, just like INT_MIN and INT_MAX, while they aren't\nexactly that:\nhttps://github.com/postgres/postgres/blame/8fecd4febf8357f3cc20383ed29ced484877d5ac/src/backend/utils/adt/float.c#L525\n\nThat logic seems to be sane in float4 case (where computation is done in\n64bit and then checked to fit into 32bit without an overflow).\nIt feels like the float8 case got there just by copy-paste, but maybe it\nwas also used to not handle NaNs - it's not there in cmp's yett.\n\nLater in 2007 Bruce Momjian removed the limitation on Infinities, but kept\nthe general structure - now subnormals are accepted, as DBL_MIN is no\nlonger used, but there is still a check that underflow occurred.\nhttps://github.com/postgres/postgres/commit/f9ac414c35ea084ff70c564ab2c32adb06d5296f#diff-7068290137a01263be83308699042f1fR58\n\n\n\n> I think that we'd probably regret it if we try to support IEEE\n> subnormals, for example --- I know that all modern hardware is probably\n> good with those, but I'd bet against different platforms' libc functions\n> all behaving the same.\n\n\nYou don't need to support them. 
You just have them already.\n\n\n> I don't see a sane way to offer user control over\n> whether we throw underflow errors or not, either.\n\n\nIEEE754 talks about CPU design. \"Exception\" there is not a postgres\nexception, that's an exceptional case in computation that may raise a flag.\nFor all those exceptional cases there is a well defined description of what\nvalue should be returned.\nhttps://en.wikipedia.org/wiki/IEEE_754#Exception_handling\n\nCurrent code looks like a misreading of what IEEE754 exception is, but upon\ncloser look it looks like a mutation of misunderstanding of what FLT_MIN is\nfor (FLT_TRUE_MIN that would fit there appeared only in C11 unfortunately).\n\n\n> (Do you really want \"+\" to stop being immutable?)\n\n\nNo, no kind of GUC switch is needed. Just drop underflow/overflow checks.\nYou'll get 0 or Infinity in expected places, and infinities are okay since\n2007.\n\nThe darker corners of IEEE754, like inexactness\n> exceptions, are even less likely to be implemented consistently\n> everywhere.\n>\n> regards, tom lane\n>\n\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa",
"msg_date": "Fri, 12 Jun 2020 05:58:51 +0300",
"msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>",
"msg_from_op": false,
"msg_subject": "Re: exp() versus the POSIX standard"
},
{
"msg_contents": "> Does anyone disagree that that's a bug? Should we back-patch\n> a fix, or just change it in HEAD? Given the lack of user\n> complaints, I lean a bit towards the latter, but am not sure.\n\nThe other functions and operators pay attention to not give an error\nwhen the input is Inf or 0. exp() and power() are at least\ninconsistent by doing so. I don't think this behavior is useful.\nAlthough it'd still be less risky to fix it in HEAD only.\n\n\n",
"msg_date": "Fri, 12 Jun 2020 16:16:14 +0100",
"msg_from": "Emre Hasegeli <emre@hasegeli.com>",
"msg_from_op": false,
"msg_subject": "Re: exp() versus the POSIX standard"
},
{
"msg_contents": "> No, no kind of GUC switch is needed. Just drop underflow/overflow checks. You'll get 0 or Infinity in expected places, and infinities are okay since 2007.\n\nThis is out of scope of this thread. I am not sure opening it to\ndiscussion on another thread would yield any result. Experienced\ndevelopers like Tom appear to be in agreement of us needing to protect\nusers from oddities of floating point numbers. (I am not.)\n\n\n",
"msg_date": "Fri, 12 Jun 2020 16:34:15 +0100",
"msg_from": "Emre Hasegeli <emre@hasegeli.com>",
"msg_from_op": false,
"msg_subject": "Re: exp() versus the POSIX standard"
},
{
"msg_contents": "Emre Hasegeli <emre@hasegeli.com> writes:\n>> No, no kind of GUC switch is needed. Just drop underflow/overflow checks. You'll get 0 or Infinity in expected places, and infinities are okay since 2007.\n\n> This is out of scope of this thread.\n\nYeah, that. At the moment I'm just interested in making the float and\nnumeric functions give equivalent results for infinite inputs. If you\nwant to make a more general proposal about removing error checks, that\nseems like a separate topic.\n\n> I am not sure opening it to\n> discussion on another thread would yield any result. Experienced\n> developers like Tom appear to be in agreement of us needing to protect\n> users from oddities of floating point numbers. (I am not.)\n\nI think there's a pretty fundamental distinction between this behavior:\n\nregression=# select exp('-inf'::float8);\n exp \n-----\n 0\n(1 row)\n\nand this one:\n\nregression=# select exp('-1000'::float8);\nERROR: value out of range: underflow\n\nIn the first case, zero is the correct answer to any precision you care\nto name. In the second case, zero is *not* the correct answer; we simply\ncannot represent the correct answer (somewhere around 1e-434) as a float8.\nReturning zero would represent 100% loss of accuracy. Now, there may well\nbe applications where you'd rather take the zero result and press on, but\nI'd argue that they're subtle ones that you're not likely gonna be writing\nin SQL.\n\nAnyway, for now I propose the attached patch. The test cases inquire into\nthe edge-case behavior of pow() much more closely than we have done in the\npast, and I wouldn't be a bit surprised if some of the older buildfarm\ncritters fail some of them. So my inclination is to try this only in\nHEAD for starters. Even if we want to back-patch, I'd be hesitant to\nput it in versions older than v12, where we started to require C99.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 13 Jun 2020 19:12:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: exp() versus the POSIX standard"
}
] |
[
{
"msg_contents": "We had a discussion recently about how it'd be a good idea to support\ninfinity values in type numeric [1]. Here's a draft patch enabling\nthat, using the idea suggested in that thread of commandeering some\nunused bits in the representation of numeric NaNs. AFAICT we've been\ncareful to ensure those bits are always zero, so that this will work\nwithout creating any pg_upgrade problems.\n\nThis is just WIP, partly because I haven't touched the SGML docs\nyet, but also because there are some loose ends to be resolved:\n\n* I believe I made all the functions that correspond to POSIX-standard\nfunctions do what POSIX says for infinite inputs. However, this does\nnot always match what our existing float8 functions do [2]. I'm\nassuming that we'll change those functions to match POSIX; but if we\ndon't, this might need another look.\n\n* I had to invent some semantics for non-standardized functions,\nparticularly numeric_mod, numeric_gcd, numeric_lcm. This area\ncould use review to be sure that I chose desirable behaviors.\n\n* I'm only about 50% sure that I understand what the sort abbreviation\ncode is doing. A quick look from Peter or some other expert would be\nhelpful.\n\n* It seems to me that the existing behavior of numeric_stddev_internal\nis not quite right for the case of a single input value that is a NaN,\nwhen in \"sample\" mode. Per the comment \"Sample stddev and variance are\nundefined when N <= 1\", ISTM that we ought to return NULL in this case,\nbut actually you get a NaN because the check for \"NaNcount > 0\" is made\nbefore considering that. 
I think that's the wrong way round --- in some\nsense NULL is \"less defined\" than NaN, so that's what we ought to use.\nMoreover, the float8 stddev code agrees: in HEAD you get\n\nregression=# SELECT stddev_samp('nan'::float8);\n stddev_samp \n-------------\n \n(1 row)\n\nregression=# SELECT stddev_samp('nan'::numeric);\n stddev_samp \n-------------\n NaN\n(1 row)\n\nSo I think we ought to make the numeric code match the former, and have\ndone that here. However, the float8 code has its own issues for the\npopulation case [3], and depending on what we do about that, this might\nneed further changes to agree. (There's also the question of whether to\nback-patch any such bug fixes.)\n\n* The jsonpath code is inconsistent about how it handles NaN vs Inf [4].\nI'm assuming here that we'll fix that by rejecting NaNs in that code,\nbut if we conclude that we do need to allow non-finite double values\nthere, probably we need to allow Infs too.\n\n* It seems like there might be a use-case for isfinite() and maybe\nisnan() SQL functions. On the other hand, we don't have those for\nfloat4/float8 either. These could be a follow-on addition, anyway.\n\nI'll stick this in the queue for review.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/27490.1590414212%40sss.pgh.pa.us\n[2] https://www.postgresql.org/message-id/flat/582552.1591917752%40sss.pgh.pa.us\n[3] https://www.postgresql.org/message-id/353062.1591898766%40sss.pgh.pa.us\n[4] https://www.postgresql.org/message-id/flat/203949.1591879542%40sss.pgh.pa.us",
"msg_date": "Thu, 11 Jun 2020 21:16:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Infinities in type numeric"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n Tom> * I'm only about 50% sure that I understand what the sort\n Tom> abbreviation code is doing. A quick look from Peter or some other\n Tom> expert would be helpful.\n\nThat code was originally mine, so I'll look at it.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Fri, 12 Jun 2020 08:25:52 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Infinities in type numeric"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n Tom> @@ -359,10 +390,14 @@ typedef struct NumericSumAccum\n Tom> #define NumericAbbrevGetDatum(X) ((Datum) (X))\n Tom> #define DatumGetNumericAbbrev(X) ((int64) (X))\n Tom> #define NUMERIC_ABBREV_NAN\t\t NumericAbbrevGetDatum(PG_INT64_MIN)\n Tom> +#define NUMERIC_ABBREV_PINF\t\t NumericAbbrevGetDatum(PG_INT64_MIN)\n Tom> +#define NUMERIC_ABBREV_NINF\t\t NumericAbbrevGetDatum(PG_INT64_MAX)\n Tom> #else\n Tom> #define NumericAbbrevGetDatum(X) ((Datum) (X))\n Tom> #define DatumGetNumericAbbrev(X) ((int32) (X))\n Tom> #define NUMERIC_ABBREV_NAN\t\t NumericAbbrevGetDatum(PG_INT32_MIN)\n Tom> +#define NUMERIC_ABBREV_PINF\t\t NumericAbbrevGetDatum(PG_INT32_MIN)\n Tom> +#define NUMERIC_ABBREV_NINF\t\t NumericAbbrevGetDatum(PG_INT32_MAX)\n Tom> #endif\n\nI'd have been more inclined to go with -PG_INT64_MAX / -PG_INT32_MAX for\nthe NUMERIC_ABBREV_PINF value. It seems more likely to be beneficial to\nbucket +Inf and NaN separately (and put +Inf in with the \"too large to\nabbreviate\" values) than to bucket them together so as to distinguish\nbetween +Inf and \"too large\" values. But this is an edge case in any\nevent, so it probably wouldn't make a great deal of difference unless\nyou're sorting on data with a large proportion of both +Inf and NaN\nvalues.\n\n(It would be possible to add another bucket so that \"too large\", +Inf,\nand NaN were three separate buckets, but honestly any more complexity\nseems not worth it for handling an edge case.)\n\nThe actual changes to the abbrev stuff look fine to me.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Fri, 12 Jun 2020 13:47:19 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Infinities in type numeric"
},
{
"msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n> Tom> +#define NUMERIC_ABBREV_PINF\t\t NumericAbbrevGetDatum(PG_INT64_MIN)\n> Tom> +#define NUMERIC_ABBREV_PINF\t\t NumericAbbrevGetDatum(PG_INT32_MIN)\n\n> I'd have been more inclined to go with -PG_INT64_MAX / -PG_INT32_MAX for\n> the NUMERIC_ABBREV_PINF value. It seems more likely to be beneficial to\n> bucket +Inf and NaN separately (and put +Inf in with the \"too large to\n> abbreviate\" values) than to bucket them together so as to distinguish\n> between +Inf and \"too large\" values. But this is an edge case in any\n> event, so it probably wouldn't make a great deal of difference unless\n> you're sorting on data with a large proportion of both +Inf and NaN\n> values.\n\nI had been worried about things possibly sorting in the wrong order\nif I did that. However, now that I look more closely I see that\n\n * We convert the absolute value of the numeric to a 31-bit or 63-bit positive\n * value, and then negate it if the original number was positive.\n\nso that a finite value should never map to INT[64]_MIN, making it\nsafe to do as you suggest. I agree that distinguishing +Inf from NaN\nis probably more useful than distinguishing it from the very largest\nclass of finite values, so will do it as you suggest. Thanks!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 12 Jun 2020 09:41:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Infinities in type numeric"
},
{
"msg_contents": "On Thu, Jun 11, 2020 at 9:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> We had a discussion recently about how it'd be a good idea to support\n> infinity values in type numeric [1].\n\nOnly a minority of that discussion was actually on that topic, and I'm\nnot sure there was a clear consensus in favor of it.\n\nFWIW, I don't particularly like the idea. Back when I was an\napplication developer, I remember having to insert special cases into\nany code that dealt with double precision to deal with +/-Inf and NaN.\nI was happy that I didn't need them for numeric, too. So this change\nwould have made me sad.\n\nIt's possible I'm the only one, though.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 12 Jun 2020 12:51:18 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinities in type numeric"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Jun 11, 2020 at 9:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> We had a discussion recently about how it'd be a good idea to support\n>> infinity values in type numeric [1].\n\n> FWIW, I don't particularly like the idea. Back when I was an\n> application developer, I remember having to insert special cases into\n> any code that dealt with double precision to deal with +/-Inf and NaN.\n> I was happy that I didn't need them for numeric, too. So this change\n> would have made me sad.\n\nWell, you're already stuck with special-casing numeric NaN, so I'm\nnot sure that Inf makes your life noticeably worse on that score.\n\nThis does tie into something I have a question about in the patch's\ncomments though. As the patch stands, numeric(numeric, integer)\n(that is, the typmod-enforcement function) just lets infinities\nthrough regardless of the typmod, on the grounds that it is/was also\nletting NaNs through regardless of typmod. But you could certainly\nmake the argument that Inf should only be allowed in an unconstrained\nnumeric column, because by definition it overflows any finite precision\nrestriction. If we did that, you'd never see Inf in a\nstandard-conforming column, since SQL doesn't allow unconstrained\nnumeric columns IIRC. That'd at least ameliorate your concern.\n\nIf we were designing this today, I think I'd vote to disallow NaN\nin a constrained numeric column, too. But I suppose it's far too\nlate to change that aspect.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 12 Jun 2020 13:00:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Infinities in type numeric"
},
{
"msg_contents": "On Fri, Jun 12, 2020 at 1:01 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> This does tie into something I have a question about in the patch's\n> comments though. As the patch stands, numeric(numeric, integer)\n> (that is, the typmod-enforcement function) just lets infinities\n> through regardless of the typmod, on the grounds that it is/was also\n> letting NaNs through regardless of typmod. But you could certainly\n> make the argument that Inf should only be allowed in an unconstrained\n> numeric column, because by definition it overflows any finite precision\n> restriction. If we did that, you'd never see Inf in a\n> standard-conforming column, since SQL doesn't allow unconstrained\n> numeric columns IIRC. That'd at least ameliorate your concern.\n\nYes, I agree. It also seems like a more principled choice - I am not\nsure why if I ask for a number no larger than 10^3 we ought to permit\ninfinity.\n\nBTW, has there been any thought to supporting a negative scale for the\nnumeric data type? If you can cut off digits after the decimal, why\nnot before?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 12 Jun 2020 13:44:05 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinities in type numeric"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n [...]\n Tom> so that a finite value should never map to INT[64]_MIN, making it\n Tom> safe to do as you suggest. I agree that distinguishing +Inf from\n Tom> NaN is probably more useful than distinguishing it from the very\n Tom> largest class of finite values, so will do it as you suggest.\n Tom> Thanks!\n\nIt would make sense to make sure there's a test case in which at least\none value of all three of: a finite value much greater than 10^332, a\n+Inf, and a NaN were all present in the same sort, if there isn't one\nalready.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Fri, 12 Jun 2020 19:02:37 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Infinities in type numeric"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> BTW, has there been any thought to supporting a negative scale for the\n> numeric data type? If you can cut off digits after the decimal, why\n> not before?\n\nHm, would there be any real use-case?\n\nAn implementation issue is that even in the \"long\" numeric format,\nwe cram dscale into a 14-bit unsigned field. You could redefine\nthe field as signed and pray that nobody has dscales above 8K\nstored on disk, but I'm dubious that there's a good argument for\ntaking that risk.\n\nThere might be algorithmic issues as well, haven't really looked.\nAny such problems would probably be soluble, if need be by forcing\nthe scale to be at least 0 for calculation and then rounding\nafterwards.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 12 Jun 2020 14:14:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Infinities in type numeric"
},
{
"msg_contents": "On Fri, Jun 12, 2020 at 2:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > BTW, has there been any thought to supporting a negative scale for the\n> > numeric data type? If you can cut off digits after the decimal, why\n> > not before?\n>\n> Hm, would there be any real use-case?\n\nCompatibility... apparently people do use it.\n\n> An implementation issue is that even in the \"long\" numeric format,\n> we cram dscale into a 14-bit unsigned field. You could redefine\n> the field as signed and pray that nobody has dscales above 8K\n> stored on disk, but I'm dubious that there's a good argument for\n> taking that risk.\n\nThat doesn't sound too appealing I guess, but couldn't you enforce it\nas a typemod without changing the on-disk representation of the\nvalues?\n\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 12 Jun 2020 15:45:43 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinities in type numeric"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Jun 12, 2020 at 2:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Robert Haas <robertmhaas@gmail.com> writes:\n>>> BTW, has there been any thought to supporting a negative scale for the\n>>> numeric data type? If you can cut off digits after the decimal, why\n>>> not before?\n\n>> Hm, would there be any real use-case?\n\n> Compatibility... apparently people do use it.\n\nUh, compatibility with what? Not the SQL spec, for sure.\n\n>> An implementation issue is that even in the \"long\" numeric format,\n>> we cram dscale into a 14-bit unsigned field.\n\n> That doesn't sound too appealing I guess, but couldn't you enforce it\n> as a typemod without changing the on-disk representation of the\n> values?\n\nOn second thought, I'm confusing two distinct though related concepts.\ndscale is *display* scale, and it's fine that it's unsigned, because\nthere is no reason to suppress printing digits to the left of the decimal\npoint. (\"Whaddya mean, 10 is really 100?\") We could allow the \"scale\"\npart of typmod to be negative and thereby cause an input of, say,\n123.45 to be rounded to say 100 --- but it should display as 100 not 1,\nso its display scale is still 0.\n\nHence, there's no pg_upgrade issue. You'd still need to rethink how\nprecision and scale get packed into an int32 typmod, but those only\nexist in catalog data, so pg_upgrade's schema dump/restore would be\nenough to update them.\n\nHaving said that, we've long resisted redefining the encoding of\ntypmod for other data types (despite the clear insanity of some\nof the encodings), for fear that client code might be looking at\nthose catalog columns. I'm not sure how big a deal that really is.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 12 Jun 2020 16:06:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Infinities in type numeric"
},
{
"msg_contents": "Here's a v2 patch:\n\n* Rebased over today's nearby commits\n\n* Documentation changes added\n\n* Sort abbrev support improved per Andrew's suggestions\n\n* Infinities now considered to fail any typmod precision limit,\n per discussion with Robert.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 13 Jun 2020 16:33:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Infinities in type numeric"
},
{
"msg_contents": "On Fri, 12 Jun 2020 at 02:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> * I had to invent some semantics for non-standardized functions,\n> particularly numeric_mod, numeric_gcd, numeric_lcm. This area\n> could use review to be sure that I chose desirable behaviors.\n>\n\nI think the semantics you've chosen for numeric_mod() are reasonable,\nand I think they're consistent with POSIX fmod().\n\nHowever, it looks like you've chosen gcd('Inf', x) = x, whereas I'd\nsay that the result should be 'NaN'.\n\nOne way to look at it is that the GCD result should exactly divide\nboth inputs with no remainder, but the remainder when you divide 'Inf'\nby x is undefined, so you can't say that x exactly divides 'Inf'.\n\nAnother way to look at it is that gcd('Inf', x) is limit(n -> 'Inf',\ngcd(n, x)), but that limit isn't well-defined. For example, suppose\nx=10, then gcd('Inf', 10) = limit(n -> 'Inf', gcd(n, 10)), but gcd(n,\n10) is either 1,2,5 or 10 depending on n, and it does not converge to\nany particular value in the limit n -> 'Inf'.\n\nA third way to look at it would be to apply one round of Euclid's\nalgorithm to it: gcd('Inf', x) = gcd(x, mod('Inf', x)) = gcd(x, 'NaN')\n= 'NaN'.\n\nNow you could argue that x=0 is a special case, and gcd('Inf', 0) =\n'Inf' on the grounds that gcd(a, 0) = a for all finite 'a'. However, I\ndon't think that's particularly useful, and it fails the first test\nthat the result exactly divides both inputs because mod('Inf', 'Inf')\nis undefined ('NaN').\n\nSimilar arguments apply to lcm(), so I'd say that both gcd() and lcm()\nshould return 'NaN' if either input is 'Inf' or 'NaN'.\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 15 Jun 2020 10:13:24 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinities in type numeric"
},
{
"msg_contents": "On 6/12/20 7:00 PM, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> On Thu, Jun 11, 2020 at 9:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> We had a discussion recently about how it'd be a good idea to support\n>>> infinity values in type numeric [1].\n> \n>> FWIW, I don't particularly like the idea. Back when I was an\n>> application developer, I remember having to insert special cases into\n>> any code that dealt with double precision to deal with +/-Inf and NaN.\n>> I was happy that I didn't need them for numeric, too. So this change\n>> would have made me sad.\n> \n> Well, you're already stuck with special-casing numeric NaN, so I'm\n> not sure that Inf makes your life noticeably worse on that score.\n> \n> This does tie into something I have a question about in the patch's\n> comments though. As the patch stands, numeric(numeric, integer)\n> (that is, the typmod-enforcement function) just lets infinities\n> through regardless of the typmod, on the grounds that it is/was also\n> letting NaNs through regardless of typmod. But you could certainly\n> make the argument that Inf should only be allowed in an unconstrained\n> numeric column, because by definition it overflows any finite precision\n> restriction. If we did that, you'd never see Inf in a\n> standard-conforming column, since SQL doesn't allow unconstrained\n> numeric columns IIRC.\n\n\nIt does. The precision and scale are both optional.\n\nIf the precision is missing, it's implementation defined; if the scale\nis missing, it's 0.\n-- \nVik Fearing\n\n\n",
"msg_date": "Tue, 16 Jun 2020 10:28:55 +0200",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: Infinities in type numeric"
},
{
"msg_contents": "Vik Fearing <vik@postgresfriends.org> writes:\n> On 6/12/20 7:00 PM, Tom Lane wrote:\n>> If we did that, you'd never see Inf in a\n>> standard-conforming column, since SQL doesn't allow unconstrained\n>> numeric columns IIRC.\n\n> It does. The precision and scale are both optional.\n> If the precision is missing, it's implementation defined; if the scale\n> is missing, it's 0.\n\nAh, right, the way in which we deviate from the spec is that an\nunconstrained numeric column doesn't coerce every entry to scale 0.\n\nStill, that *is* a spec deviation, so adding \"... and it allows Inf\"\ndoesn't seem like it's making things worse for spec-compliant apps.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Jun 2020 09:33:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Infinities in type numeric"
},
{
"msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> On Fri, 12 Jun 2020 at 02:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> * I had to invent some semantics for non-standardized functions,\n>> particularly numeric_mod, numeric_gcd, numeric_lcm. This area\n>> could use review to be sure that I chose desirable behaviors.\n\n> I think the semantics you've chosen for numeric_mod() are reasonable,\n> and I think they're consistent with POSIX fmod().\n\nAh, I had not thought to look at fmod(). I see that POSIX treats\nx-is-infinite the same as y-is-zero: raise EDOM and return NaN.\nI think it's okay to deviate to the extent of throwing an error in\none case and returning NaN in the other, but I added a comment\nnoting the difference.\n\n> Similar arguments apply to lcm(), so I'd say that both gcd() and lcm()\n> should return 'NaN' if either input is 'Inf' or 'NaN'.\n\nWorks for me; that's easier anyway.\n\nThe attached v3 patch fixes these things and also takes care of an\noversight in v2: I'd made numeric() apply typmod restrictions to Inf,\nbut not numeric_in() or numeric_recv(). I believe the patch itself\nis in pretty good shape now, though there are still some issues\nelsewhere as noted in the first message in this thread.\n\nThanks for reviewing!\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 16 Jun 2020 13:24:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Infinities in type numeric"
},
{
"msg_contents": "On Tue, 16 Jun 2020 at 18:24, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> The attached v3 patch fixes these things and also takes care of an\n> oversight in v2: I'd made numeric() apply typmod restrictions to Inf,\n> but not numeric_in() or numeric_recv(). I believe the patch itself\n> is in pretty good shape now, though there are still some issues\n> elsewhere as noted in the first message in this thread.\n>\n\nI had a look at this, and I think it's mostly in good shape. It looks\nlike everything from the first message in this thread has been\nresolved, except I don't know about the jsonpath stuff, because I\nhaven't been following that.\n\nI tried to go over all the edge cases and I think they all make sense,\nexcept for a couple of cases which I've listed below, along with a few\nother minor comments:\n\n1). I don't think that the way in_range() handles infinities is quite\nright. For example:\n\nSELECT in_range('inf'::numeric, 10::numeric, 'inf'::numeric, false, false);\n in_range\n----------\n f\n(1 row)\n\nBut I think that should return \"val >= base + offset\", which is \"Inf\n>= Inf\", which should be true.\n\nSimilarly, I think this should return true:\n\nSELECT in_range('-inf'::numeric, 10::numeric, 'inf'::numeric, true, true);\n in_range\n----------\n f\n(1 row)\n\nI think this could use some test coverage.\n\n2). I think numeric_pg_lsn() needs updating -- this should probably be an error:\n\nSELECT pg_lsn('inf'::numeric);\n pg_lsn\n--------\n 0/0\n(1 row)\n\n3). In the bottom half of numeric.c, there's a section header comment\nsaying \"Local functions follow ... In general, these do not support\nNaNs ...\". That should probably also mention infinities. There are\nalso now more functions to mention that are exceptions to that comment\nabout not handling NaN/Inf, but I think that some of the new\nexceptions can be avoided.\n\n4). 
The comment for set_var_from_str() mentions that it doesn't handle\n\"NaN\", so on the face of it, it ought to also mention that it doesn't\nhandle \"Infinity\" either. However, this is only a few lines down from\nthat \"Local functions follow ...\" section header comment, which\nalready covers that, so it seems pointless mentioning NaNs and\ninfinities again for this function (none of the other local functions\nin that section of the file do).\n\n5). It seems a bit odd that numeric_to_double_no_overflow() handles\ninfinite inputs, but not NaN inputs, while its only caller\nnumeric_float8_no_overflow() handles NaNs, but not infinities. ISTM\nthat it would be neater to have all the special-handling in one place\n(in the caller). That would also stop numeric_to_double_no_overflow()\nbeing an exception to the preceding section header comment about local\nfunctions not handling Nan/Inf. In fact, I wonder why keep\nnumeric_to_double_no_overflow() at all? It could just be rolled into\nits caller, making it more like numeric_float8().\n\n6). The next function, numericvar_to_double_no_overflow(), has a\ncomment that just says \"As above, but work from a NumericVar\", but it\nisn't really \"as above\" anymore, since it doesn't handle infinite\ninputs. Depending on what happens to numeric_to_double_no_overflow(),\nthis function's comment might need some tweaking.\n\n7). The new function numeric_is_integral() feels out of place where it\nis, amongst arithmetic functions operating on NumericVar's, because it\noperates on a Numeric, and also because it handles NaNs, making it\nanother exception to the preceding comment about local functions that\ndon't handle NaNs. Perhaps it would fit in better after\nnumeric_is_nan() and numeric_is_inf(). 
Even though it's a local\nfunction, it feels more akin to those functions.\n\nFinally, not really in the scope of this patch, but something I\nnoticed anyway while looking at edge cases -- float and numeric handle\nNaN/0 differently:\n\nSELECT 'nan'::float8 / 0::float8;\nERROR: division by zero\n\nSELECT 'nan'::numeric / 0::numeric;\n ?column?\n----------\n NaN\n\nI'm not sure if this is worth worrying about, or which behaviour is\npreferable though.\n\nRegards,\nDean\n\n\n",
"msg_date": "Thu, 16 Jul 2020 00:39:33 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinities in type numeric"
},
{
"msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> I had a look at this, and I think it's mostly in good shape. It looks\n> like everything from the first message in this thread has been\n> resolved, except I don't know about the jsonpath stuff, because I\n> haven't been following that.\n\nThanks for the careful review! Yeah, Alexander fixed the jsonpath\nstuff at df646509f, so I think all my original concerns are cleared,\nother than the question of whether to invent isfinite() and isnan()\nSQL functions. That seems like follow-on work in any case.\n\n> 1). I don't think that the way in_range() handles infinities is quite\n> right. For example:\n\n> SELECT in_range('inf'::numeric, 10::numeric, 'inf'::numeric, false, false);\n> in_range\n> ----------\n> f\n> (1 row)\n\n> But I think that should return \"val >= base + offset\", which is \"Inf\n> >= Inf\", which should be true.\n\nHmm. I modeled the logic on the float8 in_range code, which does the\nsame thing:\n\n# SELECT in_range('inf'::float8, 10::float8, 'inf'::float8, false, false);\n in_range \n----------\n f\n(1 row)\n\nIt does seem like this is wrong per the specification of in_range, though,\nso do we have a bug to fix in the float in_range support? If so I'd\nbe inclined to go correct that first and then adapt the numeric patch\nto match.\n\n> Similarly, I think this should return true:\n> SELECT in_range('-inf'::numeric, 10::numeric, 'inf'::numeric, true, true);\n\nSame comment.\n\n> I think this could use some test coverage.\n\nEvidently :-(\n\n> 2). I think numeric_pg_lsn() needs updating -- this should probably be an error:\n\nOh, that was not there when I produced my patch. 
Will cover it in the\nnext version.\n\nI agree with your other comments and will update the patch.\n\n> Finally, not really in the scope of this patch, but something I\n> noticed anyway while looking at edge cases -- float and numeric handle\n> NaN/0 differently:\n> SELECT 'nan'::float8 / 0::float8;\n> ERROR: division by zero\n> SELECT 'nan'::numeric / 0::numeric;\n> ?column?\n> ----------\n> NaN\n\nHmm. It seems like we generally ought to try to follow IEEE 754\nfor the semantics of operations on NaN, but I don't have a copy of\nthat spec so I'm not sure which result it specifies for this.\nI agree that being inconsistent between the two types is not what\nwe want.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Jul 2020 12:43:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Infinities in type numeric"
},
{
"msg_contents": "I wrote:\n> Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n>> I had a look at this, and I think it's mostly in good shape. It looks\n>> like everything from the first message in this thread has been\n>> resolved, except I don't know about the jsonpath stuff, because I\n>> haven't been following that.\n\n> Thanks for the careful review!\n\nHere's a v4 that syncs numeric in_range() with the new behavior of\nfloat in_range(), and addresses your other comments too.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 21 Jul 2020 18:18:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Infinities in type numeric"
},
{
"msg_contents": "On Tue, 21 Jul 2020 at 23:18, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Here's a v4 that syncs numeric in_range() with the new behavior of\n> float in_range(), and addresses your other comments too.\n>\n\nLGTM.\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 22 Jul 2020 22:54:57 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinities in type numeric"
},
{
"msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> On Tue, 21 Jul 2020 at 23:18, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Here's a v4 that syncs numeric in_range() with the new behavior of\n>> float in_range(), and addresses your other comments too.\n\n> LGTM.\n\nPushed. Thanks again for reviewing!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 Jul 2020 19:20:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Infinities in type numeric"
}
] |
[
{
"msg_contents": "Is it possible to make cube extension with float(4bytes) precision instead\nof double(8bytes)?\n\nI use cube extension for storing embedding vectors and calculation distance\non them. During comparing vectors, a 4byte float precision is enough.\nStoring 8 byte double precision is wasting disk space.\n\nNow to avoid disk wasting I store vectors as real[] array and create cube\nobjects on the fly. But this solution is wasting cpu time\n\n--\n\nSiarhei Damanau",
"msg_date": "Fri, 12 Jun 2020 14:41:08 +0300",
"msg_from": "Siarhei D <siarhei.damanau@gmail.com>",
"msg_from_op": true,
"msg_subject": "compile cube extension with float4 precision storing"
},
{
"msg_contents": "On Fri, Jun 12, 2020 at 02:41:08PM +0300, Siarhei D wrote:\n>Is it possible to make cube extension with float(4bytes) precision instead\n>of double(8bytes)?\n>\n>I use cube extension for storing embedding vectors and calculation distance\n>on them. During comparing vectors, a 4byte float precision is enough.\n>Storing 8 byte double precision is wasting disk space.\n>\n>Now to avoid disk wasting I store vectors as real[] array and create cube\n>objects on the fly. But this solution is wasting cpu time\n>\n\nI don't think there's built-in support for that, so the only option I\ncan think of is creating your own cube \"copy\" extension using float4.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 19 Jun 2020 19:30:30 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: compile cube extension with float4 precision storing"
}
] |
[
{
"msg_contents": "Hi,\n\nThe document explains that \"lost\" value that\npg_replication_slots.wal_status reports means\n\n some WAL files are definitely lost and this slot cannot be used to resume replication anymore.\n\nHowever, I observed \"lost\" value while inserting lots of records,\nbut replication could continue normally. So I wonder if\npg_replication_slots.wal_status may have a bug.\n\nwal_status is calculated in GetWALAvailability(), and probably I found\nsome issues in it.\n\n\n\tkeepSegs = ConvertToXSegs(Max(max_wal_size_mb, wal_keep_segments),\n\t\t\t\t\t\t\t wal_segment_size) + 1;\n\nmax_wal_size_mb is the number of megabytes. wal_keep_segments is\nthe number of WAL segment files. So it's strange to calculate max of them.\nThe above should be the following?\n\n Max(ConvertToXSegs(max_wal_size_mb, wal_segment_size), wal_keep_segments) + 1\n\n\n\n\t\tif ((max_slot_wal_keep_size_mb <= 0 ||\n\t\t\t max_slot_wal_keep_size_mb >= max_wal_size_mb) &&\n\t\t\toldestSegMaxWalSize <= targetSeg)\n\t\t\treturn WALAVAIL_NORMAL;\n\nThis code means that wal_status reports \"normal\" only when\nmax_slot_wal_keep_size is negative or larger than max_wal_size.\nWhy is this condition necessary? The document explains \"normal\n means that the claimed files are within max_wal_size\". So whatever\n max_slot_wal_keep_size value is, IMO that \"normal\" should be\n reported if the WAL files claimed by the slot are within max_wal_size.\n Thought?\n\nOr, if that condition is really necessary, the document should be\nupdated so that the note about the condition is added.\n\n\n\nIf the WAL files claimed by the slot exceed max_slot_wal_keep_size\nbut none of those WAL files have been removed yet, wal_status seems\nto report \"lost\". Is this expected behavior? Per the meaning of \"lost\"\ndescribed in the document, \"lost\" should be reported only when\nany claimed files are removed, I think. 
Thought?\n\nOr this behavior is expected and the document is incorrect?\n\n\n\nBTW, if we want to implement GetWALAvailability() as the document\nadvertises, we can simplify it like the attached POC patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Sat, 13 Jun 2020 01:38:49 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Review for GetWALAvailability()"
},
{
"msg_contents": "At Sat, 13 Jun 2020 01:38:49 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> Hi,\n> \n> The document explains that \"lost\" value that\n> pg_replication_slots.wal_status reports means\n> \n> some WAL files are definitely lost and this slot cannot be used to\n> resume replication anymore.\n> \n> However, I observed \"lost\" value while inserting lots of records,\n> but replication could continue normally. So I wonder if\n> pg_replication_slots.wal_status may have a bug.\n> \n> wal_status is calculated in GetWALAvailability(), and probably I found\n> some issues in it.\n> \n> \n> \tkeepSegs = ConvertToXSegs(Max(max_wal_size_mb, wal_keep_segments),\n> \t\t\t\t\t\t\t wal_segment_size) +\n> \t\t\t\t\t\t\t 1;\n> \n> max_wal_size_mb is the number of megabytes. wal_keep_segments is\n> the number of WAL segment files. So it's strange to calculate max of\n> them.\n\nOops! I don't want to believe I did that but it's definitely wrong.\n\n> The above should be the following?\n> \n> Max(ConvertToXSegs(max_wal_size_mb, wal_segment_size),\n> wal_keep_segments) + 1\n\nLooks reasonable.\n\n> \t\tif ((max_slot_wal_keep_size_mb <= 0 ||\n> \t\t\t max_slot_wal_keep_size_mb >= max_wal_size_mb) &&\n> \t\t\toldestSegMaxWalSize <= targetSeg)\n> \t\t\treturn WALAVAIL_NORMAL;\n> \n> This code means that wal_status reports \"normal\" only when\n> max_slot_wal_keep_size is negative or larger than max_wal_size.\n> Why is this condition necessary? The document explains \"normal\n> means that the claimed files are within max_wal_size\". So whatever\n> max_slot_wal_keep_size value is, IMO that \"normal\" should be\n> reported if the WAL files claimed by the slot are within max_wal_size.\n> Thought?\n\nIt was a kind of hard to decide. Even when max_slot_wal_keep_size is\nsmaller than max_wal_size, the segments more than\nmax_slot_wal_keep_size are not guaranteed to be kept. 
In that case\nthe state transitions as NORMAL->LOST skipping the \"RESERVED\" state.\nPutting aside whether the setting is useful or not, I thought that the\nstate transition is somewhat abrupt.\n\n> Or, if that condition is really necessary, the document should be\n> updated so that the note about the condition is added.\n\nDoes the following make sense?\n\nhttps://www.postgresql.org/docs/13/view-pg-replication-slots.html\n\nnormal means that the claimed files are within max_wal_size.\n+ If max_slot_wal_keep_size is smaller than max_wal_size, this state\n+ will not appear.\n\n> If the WAL files claimed by the slot exceed max_slot_wal_keep_size\n> but none of those WAL files have been removed yet, wal_status seems\n> to report \"lost\". Is this expected behavior? Per the meaning of \"lost\"\n> described in the document, \"lost\" should be reported only when\n> any claimed files are removed, I think. Thought?\n> \n> Or this behavior is expected and the document is incorrect?\n\nIn short, it is known behavior, but it was judged not worth preventing.\n\nThat can happen when checkpointer removes up to the segment that is\nbeing read by walsender. I think that that doesn't happen (or\nhappens within a narrow time window?) for physical replication but\nhappens for logical replication.\n\nDuring development, I once added code to walsender to exit for that\nreason, but finally it was moved to InvalidateObsoleteReplicationSlots\nas a somewhat different function. With the current mechanism, there's a\ncase where a once-invalidated slot can come back to life; we decided to\nallow that behavior but forgot to document it.\n\nAnyway, if you see \"lost\", something bad is happening.\n\n- lost means that some WAL files are definitely lost and this slot\n- cannot be used to resume replication anymore.\n+ lost means that some required WAL files are removed and this slot is\n+ no longer usable after once disconnected during this status.\n\nIf it is crucial that the \"lost\" state may come back to reserved or\nnormal state, \n\n+ Note that there are cases where the state moves back to reserved or\n+ normal state when all wal senders have left the just removed segment\n+ before being terminated.\n\nThere is a case where the state moves back to reserved or normal state when wal senders leave the just removed segment before being terminated.\n\n> BTW, if we want to implement GetWALAvailability() as the document\n> advertises, we can simplify it like the attached POC patch.\n\nI'm not sure it is right that the patch removes wal_keep_segments from\nthe function.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 15 Jun 2020 13:42:25 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "At Mon, 15 Jun 2020 13:42:25 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Oops! I don't want to believe I did that but it's definitely wrong.\n\nHmm. Quite disappointing. The patch was just a crap.\nThis is the right patch.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 15 Jun 2020 16:31:42 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "\n\nOn 2020/06/15 13:42, Kyotaro Horiguchi wrote:\n> At Sat, 13 Jun 2020 01:38:49 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>> Hi,\n>>\n>> The document explains that \"lost\" value that\n>> pg_replication_slots.wal_status reports means\n>>\n>> some WAL files are definitely lost and this slot cannot be used to\n>> resume replication anymore.\n>>\n>> However, I observed \"lost\" value while inserting lots of records,\n>> but replication could continue normally. So I wonder if\n>> pg_replication_slots.wal_status may have a bug.\n>>\n>> wal_status is calculated in GetWALAvailability(), and probably I found\n>> some issues in it.\n>>\n>>\n>> \tkeepSegs = ConvertToXSegs(Max(max_wal_size_mb, wal_keep_segments),\n>> \t\t\t\t\t\t\t wal_segment_size) +\n>> \t\t\t\t\t\t\t 1;\n>>\n>> max_wal_size_mb is the number of megabytes. wal_keep_segments is\n>> the number of WAL segment files. So it's strange to calculate max of\n>> them.\n> \n> Oops! I don't want to believe I did that but it's definitely wrong.\n> \n>> The above should be the following?\n>>\n>> Max(ConvertToXSegs(max_wal_size_mb, wal_segment_size),\n>> wal_keep_segments) + 1\n> \n> Looks reasonable.\n> \n>> \t\tif ((max_slot_wal_keep_size_mb <= 0 ||\n>> \t\t\t max_slot_wal_keep_size_mb >= max_wal_size_mb) &&\n>> \t\t\toldestSegMaxWalSize <= targetSeg)\n>> \t\t\treturn WALAVAIL_NORMAL;\n>>\n>> This code means that wal_status reports \"normal\" only when\n>> max_slot_wal_keep_size is negative or larger than max_wal_size.\n>> Why is this condition necessary? The document explains \"normal\n>> means that the claimed files are within max_wal_size\". So whatever\n>> max_slot_wal_keep_size value is, IMO that \"normal\" should be\n>> reported if the WAL files claimed by the slot are within max_wal_size.\n>> Thought?\n> \n> It was a kind of hard to decide. 
Even when max_slot_wal_keep_size is\n> smaller than max_wal_size, the segments more than\n> max_slot_wal_keep_size are not guaranteed to be kept. In that case\n> the state transitions as NORMAL->LOST skipping the \"RESERVED\" state.\n> Putting aside whether the setting is useful or not, I thought that the\n> state transition is somewhat abrupt.\n\nIMO the direct transition of the state from normal to lost is ok\nif each state is clearly defined.\n\n\n>> Or, if that condition is really necessary, the document should be\n>> updated so that the note about the condition is added.\n> \n> Does the following make sense?\n> \n> https://www.postgresql.org/docs/13/view-pg-replication-slots.html\n> \n> normal means that the claimed files are within max_wal_size.\n> + If max_slot_wal_keep_size is smaller than max_wal_size, this state\n> + will not appear.\n\nI don't think this change is enough. For example, when max_slot_wal_keep_size\nis smaller than max_wal_size and the amount of WAL files claimed by the slot\nis smaller than max_slot_wal_keep_size, \"reserved\" is reported, which is\ninconsistent with the meaning of \"reserved\" in the docs.\n\nTo consider what should be reported in wal_status, could you tell me for what\npurpose and how users are expected to use this information?\n\n\n>> If the WAL files claimed by the slot exceed max_slot_wal_keep_size\n>> but none of those WAL files have been removed yet, wal_status seems\n>> to report \"lost\". Is this expected behavior? Per the meaning of \"lost\"\n>> described in the document, \"lost\" should be reported only when\n>> any claimed files are removed, I think. Thought?\n>>\n>> Or this behavior is expected and the document is incorrect?\n> \n> In short, it is known behavior, but it was judged not worth preventing.\n> \n> That can happen when checkpointer removes up to the segment that is\n> being read by walsender. I think that that doesn't happen (or\n> happens within a narrow time window?) 
for physical replication but\n> happenes for logical replication.\n> \n> While development, I once added walsender a code to exit for that\n> reason, but finally it is moved to InvalidateObsoleteReplicationSlots\n> as a bit defferent function. With the current mechanism, there's a\n> case where once invalidated slot came to revive but we decided to\n> allow that behavior, but forgot to document that.\n> \n> Anyway if you see \"lost\", something bad is being happening.\n> \n> - lost means that some WAL files are definitely lost and this slot\n> - cannot be used to resume replication anymore.\n> + lost means that some required WAL files are removed and this slot is\n> + no longer usable after once disconnected during this status.\n> \n> If it is crucial that the \"lost\" state may come back to reserved or\n> normal state,\n> \n> + Note that there are cases where the state moves back to reserved or\n> + normal state when all wal senders have left the just removed segment\n> + before being terminated.\n> \n> There is a case where the state moves back to reserved or normal state when wal senders leaves the just removed segment before being terminated.\n\nEven if walsender is terminated during the state \"lost\", unless checkpointer\nremoves the required WAL files, the state can go back to \"reserved\" after a\nnew replication connection is established. Is this the same as what you're\nexplaining above?\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 15 Jun 2020 18:59:49 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "On 2020/06/15 13:42, Kyotaro Horiguchi wrote:\n> At Sat, 13 Jun 2020 01:38:49 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>> Hi,\n>>\n>> The document explains that \"lost\" value that\n>> pg_replication_slots.wal_status reports means\n>>\n>> some WAL files are definitely lost and this slot cannot be used to\n>> resume replication anymore.\n>>\n>> However, I observed \"lost\" value while inserting lots of records,\n>> but replication could continue normally. So I wonder if\n>> pg_replication_slots.wal_status may have a bug.\n>>\n>> wal_status is calculated in GetWALAvailability(), and probably I found\n>> some issues in it.\n>>\n>>\n>> \tkeepSegs = ConvertToXSegs(Max(max_wal_size_mb, wal_keep_segments),\n>> \t\t\t\t\t\t\t wal_segment_size) +\n>> \t\t\t\t\t\t\t 1;\n>>\n>> max_wal_size_mb is the number of megabytes. wal_keep_segments is\n>> the number of WAL segment files. So it's strange to calculate max of\n>> them.\n> \n> Oops! I don't want to believe I did that but it's definitely wrong.\n> \n>> The above should be the following?\n>>\n>> Max(ConvertToXSegs(max_wal_size_mb, wal_segment_size),\n>> wal_keep_segments) + 1\n> \n> Looks reasonable.\n> \n>> \t\tif ((max_slot_wal_keep_size_mb <= 0 ||\n>> \t\t\t max_slot_wal_keep_size_mb >= max_wal_size_mb) &&\n>> \t\t\toldestSegMaxWalSize <= targetSeg)\n>> \t\t\treturn WALAVAIL_NORMAL;\n>>\n>> This code means that wal_status reports \"normal\" only when\n>> max_slot_wal_keep_size is negative or larger than max_wal_size.\n>> Why is this condition necessary? The document explains \"normal\n>> means that the claimed files are within max_wal_size\". So whatever\n>> max_slot_wal_keep_size value is, IMO that \"normal\" should be\n>> reported if the WAL files claimed by the slot are within max_wal_size.\n>> Thought?\n> \n> It was a kind of hard to decide. 
Even when max_slot_wal_keep_size is\n> smaller than max_wal_size, the segments more than\n> max_slot_wal_keep_size are not guaranteed to be kept. In that case\n> the state transits as NORMAL->LOST skipping the \"RESERVED\" state.\n> Putting aside whether the setting is useful or not, I thought that the\n> state transition is somewhat abrupt.\n> \n>> Or, if that condition is really necessary, the document should be\n>> updated so that the note about the condition is added.\n> \n> Does the following make sense?\n> \n> https://www.postgresql.org/docs/13/view-pg-replication-slots.html\n> \n> normal means that the claimed files are within max_wal_size.\n> + If max_slot_wal_keep_size is smaller than max_wal_size, this state\n> + will not appear.\n> \n>> If the WAL files claimed by the slot exceeds max_slot_wal_keep_size\n>> but any those WAL files have not been removed yet, wal_status seems\n>> to report \"lost\". Is this expected behavior? Per the meaning of \"lost\"\n>> described in the document, \"lost\" should be reported only when\n>> any claimed files are removed, I think. Thought?\n>>\n>> Or this behavior is expected and the document is incorrect?\n> \n> In short, it is known behavior but it was judged as useless to prevent\n> that.\n> \n> That can happen when checkpointer removes up to the segment that is\n> being read by walsender. I think that that doesn't happen (or\n> happenswithin a narrow time window?) for physical replication but\n> happenes for logical replication.\n> \n> While development, I once added walsender a code to exit for that\n> reason, but finally it is moved to InvalidateObsoleteReplicationSlots\n> as a bit defferent function.\n\nBTW, I read the code of InvalidateObsoleteReplicationSlots() and probably\nfound some issues in it.\n\n1. Each cycle of the \"for\" loop in InvalidateObsoleteReplicationSlots()\n emits the log message \"terminating walsender ...\". 
This means that\n   if it takes more than 10ms for walsender to exit after it's signaled,\n   the second and subsequent cycles would happen and output the same\n   log message several times. IMO that log message should be output\n   only once.\n\n2. InvalidateObsoleteReplicationSlots() uses the loop to scan replication\n   slots array and uses the \"for\" loop in each scan. Also it calls\n   ReplicationSlotAcquire() for each \"for\" loop cycle, and\n   ReplicationSlotAcquire() uses another loop to scan replication slots\n   array. I don't think this is good design.\n\n   ISTM that we can get rid of ReplicationSlotAcquire()'s loop because\n   InvalidateObsoleteReplicationSlots() already knows the index of the slot\n   that we want to find. The attached patch does that. Thought?\n\n3. There is a corner case where the termination of walsender cleans up\n   the temporary replication slot while InvalidateObsoleteReplicationSlots()\n   is sleeping on ConditionVariableTimedSleep(). In this case,\n   ReplicationSlotAcquire() is called in the subsequent cycle of the \"for\"\n   loop, cannot find the slot and then emits an ERROR message. This leads\n   to the failure of checkpoint by the checkpointer.\n\n   To avoid this case, if SAB_Inquire is specified, ReplicationSlotAcquire()\n   should return the special value instead of emitting ERROR even when\n   it cannot find the slot. Also InvalidateObsoleteReplicationSlots() should\n   handle that special returned value.\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Tue, 16 Jun 2020 01:46:21 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "At Mon, 15 Jun 2020 18:59:49 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> > It was a kind of hard to decide. Even when max_slot_wal_keep_size is\n> > smaller than max_wal_size, the segments more than\n> > max_slot_wal_keep_size are not guaranteed to be kept. In that case\n> > the state transits as NORMAL->LOST skipping the \"RESERVED\" state.\n> > Putting aside whether the setting is useful or not, I thought that the\n> > state transition is somewhat abrupt.\n> \n> IMO the direct transition of the state from normal to lost is ok to me\n> if each state is clearly defined.\n> \n> >> Or, if that condition is really necessary, the document should be\n> >> updated so that the note about the condition is added.\n> > Does the following make sense?\n> > https://www.postgresql.org/docs/13/view-pg-replication-slots.html\n> > normal means that the claimed files are within max_wal_size.\n> > + If max_slot_wal_keep_size is smaller than max_wal_size, this state\n> > + will not appear.\n> \n> I don't think this change is enough. For example, when\n> max_slot_wal_keep_size\n> is smaller than max_wal_size and the amount of WAL files claimed by\n> the slot\n> is smaller thhan max_slot_wal_keep_size, \"reserved\" is reported. But\n> which is\n> inconsistent with the meaning of \"reserved\" in the docs.\n\nYou're right.\n\n> To consider what should be reported in wal_status, could you tell me\n> what\n> purpose and how the users is expected to use this information?\n\nI saw that the \"reserved\" is the state where slots are working to\nretain segments, and \"normal\" is the state to indicate that \"WAL\nsegments are within max_wal_size\", which is orthogonal to the notion\nof \"reserved\". 
So it seems to me useless when the retained WAL\nsegments cannot exceed max_wal_size.\n\nWith longer descriptions they would be:\n\n\"reserved under max_wal_size\"\n\"reserved over max_wal_size\"\n\"lost some segments\"\n\nCome to think of it, I realized that my trouble was just the\nwording. Do the following wordings make sense to you?\n\n\"reserved\" - retained within max_wal_size\n\"extended\" - retained over max_wal_size\n\"lost\" - lost some segments\n\nWith these wordings I can live with \"not extended\"=>\"lost\". Of course\nmore appropriate wordings are welcome.\n\n> Even if walsender is terminated during the state \"lost\", unless\n> checkpointer\n> removes the required WAL files, the state can go back to \"reserved\"\n> after\n> new replication connection is established. This is the same as what\n> you're\n> explaining at the above?\n\nGetWALAvailability checks restart_lsn against lastRemovedSegNo, thus\nthe \"lost\" cannot be seen unless checkpointer has actually removed\nthe segment at restart_lsn (and restart_lsn has not been invalidated).\nHowever, walsenders are killed before those segments are actually\nremoved, so there're cases where physical walreceiver reconnects before\nRemoveOldXlogFiles removes all segments, and they are removed after\nreconnection. \"lost\" can go back to \"reserved\" in that case. (Physical\nwalreceiver can connect to an invalid-restart_lsn slot.)\n\nI noticed another issue. If some required WALs are removed, the\nslot will be \"invalidated\", that is, restart_lsn is set to an invalid\nvalue. As a result we hardly see the \"lost\" state.\n\nIt can be \"fixed\" by remembering the validity of a slot separately\nfrom restart_lsn. Is that worth doing?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 16 Jun 2020 12:02:36 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "At Tue, 16 Jun 2020 01:46:21 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> > In short, it is known behavior but it was judged as useless to prevent\n> > that.\n> > That can happen when checkpointer removes up to the segment that is\n> > being read by walsender. I think that that doesn't happen (or\n> > happenswithin a narrow time window?) for physical replication but\n> > happenes for logical replication.\n> > While development, I once added walsender a code to exit for that\n> > reason, but finally it is moved to InvalidateObsoleteReplicationSlots\n> > as a bit defferent function.\n> \n> BTW, I read the code of InvalidateObsoleteReplicationSlots() and\n> probably\n> found some issues in it.\n> \n> 1. Each cycle of the \"for\" loop in\n> InvalidateObsoleteReplicationSlots()\n> emits the log message \"terminating walsender ...\". This means that\n> if it takes more than 10ms for walsender to exit after it's signaled,\n> the second and subsequent cycles would happen and output the same\n> log message several times. IMO that log message should be output\n> only once.\n\nSounds reasonable.\n\n> 2. InvalidateObsoleteReplicationSlots() uses the loop to scan\n> replication\t\t\t\t\t\t\t \n> slots array and uses the \"for\" loop in each scan. Also it calls\n> ReplicationSlotAcquire() for each \"for\" loop cycle, and\n> ReplicationSlotAcquire() uses another loop to scan replication slots\n> array. I don't think this is good design.\n> \n> ISTM that we can get rid of ReplicationSlotAcquire()'s loop because\n> InvalidateObsoleteReplicationSlots() already know the index of the\n> slot\n> that we want to find. The attached patch does that. Thought?\n\nThe inner loop is expected to run at most several times per\ncheckpoint, which won't be a serious problem. However, it is better if\nwe can get rid of that in a reasonable way.\n\nThe attached patch changes the behavior for SAB_Block. 
Before the\npatch, it rescans from the first slot for the same name, but with the\npatch it just rechecks the same slot. The only caller of the function\nwith SAB_Block is ReplicationSlotDrop and I can't come up with a case\nwhere another slot with the same name is created at a different place\nbefore the condition variable fires. But I'm not sure the change is\ncompletely safe. Maybe some assertion is needed?\n\n> 3. There is a corner case where the termination of walsender cleans up\n> the temporary replication slot while\n> InvalidateObsoleteReplicationSlots()\n> is sleeping on ConditionVariableTimedSleep(). In this case,\n> ReplicationSlotAcquire() is called in the subsequent cycle of the\n> \"for\"\n> loop, cannot find the slot and then emits ERROR message. This leads\n> to the failure of checkpoint by the checkpointer.\n\nAgreed.\n\n> To avoid this case, if SAB_Inquire is specified,\n> ReplicationSlotAcquire()\n> should return the special value instead of emitting ERROR even when\n> it cannot find the slot. Also InvalidateObsoleteReplicationSlots()\n> should\n> handle that special returned value.\n\nI thought the same thing upon hearing that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 16 Jun 2020 14:00:26 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "On 2020/06/16 14:00, Kyotaro Horiguchi wrote:\n> At Tue, 16 Jun 2020 01:46:21 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>> In short, it is known behavior but it was judged as useless to prevent\n>>> that.\n>>> That can happen when checkpointer removes up to the segment that is\n>>> being read by walsender. I think that that doesn't happen (or\n>>> happenswithin a narrow time window?) for physical replication but\n>>> happenes for logical replication.\n>>> While development, I once added walsender a code to exit for that\n>>> reason, but finally it is moved to InvalidateObsoleteReplicationSlots\n>>> as a bit defferent function.\n>>\n>> BTW, I read the code of InvalidateObsoleteReplicationSlots() and\n>> probably\n>> found some issues in it.\n>>\n>> 1. Each cycle of the \"for\" loop in\n>> InvalidateObsoleteReplicationSlots()\n>> emits the log message \"terminating walsender ...\". This means that\n>> if it takes more than 10ms for walsender to exit after it's signaled,\n>> the second and subsequent cycles would happen and output the same\n>> log message several times. IMO that log message should be output\n>> only once.\n> \n> Sounds reasonable.\n> \n>> 2. InvalidateObsoleteReplicationSlots() uses the loop to scan\n>> replication\t\t\t\t\t\t\t\n>> slots array and uses the \"for\" loop in each scan. Also it calls\n>> ReplicationSlotAcquire() for each \"for\" loop cycle, and\n>> ReplicationSlotAcquire() uses another loop to scan replication slots\n>> array. I don't think this is good design.\n>>\n>> ISTM that we can get rid of ReplicationSlotAcquire()'s loop because\n>> InvalidateObsoleteReplicationSlots() already know the index of the\n>> slot\n>> that we want to find. The attached patch does that. Thought?\n> \n> The inner loop is expected to run at most several times per\n> checkpoint, which won't be a serious problem. 
However, it is better if\n> we can get rid of that in a reasonable way.\n> \n> The attached patch changes the behavior for SAB_Block. Before the\n> patch, it rescans from the first slot for the same name, but with the\n> patch it just rechecks the same slot. The only caller of the function\n> with SAB_Block is ReplicationSlotDrop and I don't come up with a case\n> where another slot with the same name is created at different place\n> before the condition variable fires. But I'm not sure the change is\n> completely safe.\n\nYes, that change might not be safe. So I'm thinking another approach to\nfix the issues.\n\n> Maybe some assertion is needed?\n> \n>> 3. There is a corner case where the termination of walsender cleans up\n>> the temporary replication slot while\n>> InvalidateObsoleteReplicationSlots()\n>> is sleeping on ConditionVariableTimedSleep(). In this case,\n>> ReplicationSlotAcquire() is called in the subsequent cycle of the\n>> \"for\"\n>> loop, cannot find the slot and then emits ERROR message. This leads\n>> to the failure of checkpoint by the checkpointer.\n> \n> Agreed.\n> \n>> To avoid this case, if SAB_Inquire is specified,\n>> ReplicationSlotAcquire()\n>> should return the special value instead of emitting ERROR even when\n>> it cannot find the slot. Also InvalidateObsoleteReplicationSlots()\n>> should\n>> handle that special returned value.\n> \n> I thought the same thing hearing that.\n\nWhile reading InvalidateObsoleteReplicationSlots() code, I found another issue.\n\n\t\t\tereport(LOG,\n\t\t\t\t\t(errmsg(\"terminating walsender %d because replication slot \\\"%s\\\" is too far behind\",\n\t\t\t\t\t\t\twspid, NameStr(slotname))));\n\t\t\t(void) kill(wspid, SIGTERM);\n\nwspid indicates the PID of process using the slot. 
That process can be\na backend, for example, executing pg_replication_slot_advance().\nSo \"walsender\" in the above log message is not always correct.\n\n\n\n\t\t\tint\t\t\twspid = ReplicationSlotAcquire(NameStr(slotname),\n\t\t\t\t\t\t\t\t\t\t\t\t\t   SAB_Inquire);\n\nWhy do we need to call ReplicationSlotAcquire() here and mark the slot as\nused by the checkpointer? Isn't it enough to check directly the slot's\nactive_pid, instead?\n\nMaybe ReplicationSlotAcquire() is necessary because\nReplicationSlotRelease() is called later? If so, why do we need to call\nReplicationSlotRelease()? ISTM that we don't need to do that if the slot's\nactive_pid is zero. No?\n\nIf my understanding is right, I'd like to propose the attached patch.\nIt introduces DeactivateReplicationSlot() and replaces the \"for\" loop\nin InvalidateObsoleteReplicationSlots() with it. ReplicationSlotAcquire()\nand ReplicationSlotRelease() are no longer called there.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Wed, 17 Jun 2020 00:46:38 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "On 2020-Jun-16, Kyotaro Horiguchi wrote:\n\n> I noticed the another issue. If some required WALs are removed, the\n> slot will be \"invalidated\", that is, restart_lsn is set to invalid\n> value. As the result we hardly see the \"lost\" state.\n> \n> It can be \"fixed\" by remembering the validity of a slot separately\n> from restart_lsn. Is that worth doing?\n\nWe discussed this before. I agree it would be better to do this\nin some way, but I fear that if we do it naively, some code might exist\nthat reads the LSN without realizing that it needs to check the validity\nflag first.\n\nOn the other hand, maybe this is not a problem in practice, because if\nsuch a bug occurs, what will happen is that trying to read WAL from such\na slot will return the error message that the WAL file cannot be found.\nMaybe this is acceptable?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 16 Jun 2020 14:31:43 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "On 2020-Jun-17, Fujii Masao wrote:\n\n> While reading InvalidateObsoleteReplicationSlots() code, I found another issue.\n> \n> \t\t\tereport(LOG,\n> \t\t\t\t\t(errmsg(\"terminating walsender %d because replication slot \\\"%s\\\" is too far behind\",\n> \t\t\t\t\t\t\twspid, NameStr(slotname))));\n> \t\t\t(void) kill(wspid, SIGTERM);\n> \n> wspid indicates the PID of process using the slot. That process can be\n> a backend, for example, executing pg_replication_slot_advance().\n> So \"walsender\" in the above log message is not always correct.\n\nGood point.\n\n> \t\t\tint\t\t\twspid = ReplicationSlotAcquire(NameStr(slotname),\n> \t\t\t\t\t\t\t\t\t\t\t\t\t SAB_Inquire);\n> \n> Why do we need to call ReplicationSlotAcquire() here and mark the slot as\n> used by the checkpointer? Isn't it enough to check directly the slot's\n> active_pid, instead?\n> \n> Maybe ReplicationSlotAcquire() is necessary because\n> ReplicationSlotRelease() is called later? If so, why do we need to call\n> ReplicationSlotRelease()? ISTM that we don't need to do that if the slot's\n> active_pid is zero. No?\n\nI think the point here was that in order to modify the slot you have to\nacquire it -- it's not valid to modify a slot you don't own.\n\n\n> +\t\t/*\n> +\t\t * Signal to terminate the process using the replication slot.\n> +\t\t *\n> +\t\t * Try to signal every 100ms until it succeeds.\n> +\t\t */\n> +\t\tif (!killed && kill(active_pid, SIGTERM) == 0)\n> +\t\t\tkilled = true;\n> +\t\tConditionVariableTimedSleep(&slot->active_cv, 100,\n> +\t\t\t\t\t\t\t\t\tWAIT_EVENT_REPLICATION_SLOT_DROP);\n> +\t} while (ReplicationSlotIsActive(slot, NULL));\n\nNote that here you're signalling only once and then sleeping many times\nin increments of 100ms -- you're not signalling every 100ms as the\ncomment claims -- unless the signal fails, but you don't really expect\nthat. 
On the contrary, I'd claim that the logic is reversed: if the\nsignal fails, *then* you should stop signalling.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 16 Jun 2020 14:50:47 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "At Wed, 17 Jun 2020 00:46:38 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> >> 2. InvalidateObsoleteReplicationSlots() uses the loop to scan\n> >> replication\t\t\t\t\t\t\t\n> >> slots array and uses the \"for\" loop in each scan. Also it calls\n> >> ReplicationSlotAcquire() for each \"for\" loop cycle, and\n> >> ReplicationSlotAcquire() uses another loop to scan replication slots\n> >> array. I don't think this is good design.\n> >>\n> >> ISTM that we can get rid of ReplicationSlotAcquire()'s loop because\n> >> InvalidateObsoleteReplicationSlots() already know the index of the\n> >> slot\n> >> that we want to find. The attached patch does that. Thought?\n> > The inner loop is expected to run at most several times per\n> > checkpoint, which won't be a serious problem. However, it is better if\n> > we can get rid of that in a reasonable way.\n> > The attached patch changes the behavior for SAB_Block. Before the\n> > patch, it rescans from the first slot for the same name, but with the\n> > patch it just rechecks the same slot. The only caller of the function\n> > with SAB_Block is ReplicationSlotDrop and I don't come up with a case\n> > where another slot with the same name is created at different place\n> > before the condition variable fires. But I'm not sure the change is\n> > completely safe.\n> \n> Yes, that change might not be safe. So I'm thinking another approach\n> to\n> fix the issues.\n> \n> > Maybe some assertion is needed?\n> > \n> >> 3. There is a corner case where the termination of walsender cleans up\n> >> the temporary replication slot while\n> >> InvalidateObsoleteReplicationSlots()\n> >> is sleeping on ConditionVariableTimedSleep(). In this case,\n> >> ReplicationSlotAcquire() is called in the subsequent cycle of the\n> >> \"for\"\n> >> loop, cannot find the slot and then emits ERROR message. 
This leads\n> >> to the failure of checkpoint by the checkpointer.\n> > Agreed.\n> > \n> >> To avoid this case, if SAB_Inquire is specified,\n> >> ReplicationSlotAcquire()\n> >> should return the special value instead of emitting ERROR even when\n> >> it cannot find the slot. Also InvalidateObsoleteReplicationSlots()\n> >> should\n> >> handle that special returned value.\n> > I thought the same thing hearing that.\n> \n> While reading InvalidateObsoleteReplicationSlots() code, I found\n> another issue.\n> \n> \t\t\tereport(LOG,\n> \t\t\t\t\t(errmsg(\"terminating walsender %d\n> \t\t\t\t\tbecause replication slot \\\"%s\\\" is too\n> \t\t\t\t\tfar behind\",\n> \t\t\t\t\t\t\twspid,\n> \t\t\t\t\t\t\tNameStr(slotname))));\n> \t\t\t(void) kill(wspid, SIGTERM);\n> \n> wspid indicates the PID of process using the slot. That process can be\n> a backend, for example, executing pg_replication_slot_advance().\n> So \"walsender\" in the above log message is not always correct.\n\nAgreed.\n\n> \n> \t\t\tint wspid = ReplicationSlotAcquire(NameStr(slotname),\n> \t\t\t\t\t\t\t\t\t\t\t\t\t SAB_Inquire);\n> \n> Why do we need to call ReplicationSlotAcquire() here and mark the slot\n> as\n> used by the checkpointer? Isn't it enough to check directly the slot's\n> active_pid, instead?\n> Maybe ReplicationSlotAcquire() is necessary because\n> ReplicationSlotRelease() is called later? If so, why do we need to\n> call\n> ReplicationSlotRelease()? ISTM that we don't need to do that if the\n> slot's\n> active_pid is zero. No?\n\nMy understanding of the reason is that we update a slot value here.\nThe restriction allows the owner of a slot to assume that all the slot\nvalues don't voluntarily change.\n\nslot.h:104\n| * - Individual fields are protected by mutex where only the backend owning\n| * the slot is authorized to update the fields from its own slot. 
The\n| * backend owning the slot does not need to take this lock when reading its\n| * own fields, while concurrent backends not owning this slot should take the\n| * lock when reading this slot's data.\n\n> If my understanding is right, I'd like to propose the attached patch.\n> It introduces DeactivateReplicationSlot() and replace the \"for\" loop\n> in InvalidateObsoleteReplicationSlots() with\n> it. ReplicationSlotAcquire()\n> and ReplicationSlotRelease() are no longer called there.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 17 Jun 2020 10:01:21 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "At Tue, 16 Jun 2020 14:31:43 -0400, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in \n> On 2020-Jun-16, Kyotaro Horiguchi wrote:\n> \n> > I noticed the another issue. If some required WALs are removed, the\n> > slot will be \"invalidated\", that is, restart_lsn is set to invalid\n> > value. As the result we hardly see the \"lost\" state.\n> > \n> > It can be \"fixed\" by remembering the validity of a slot separately\n> > from restart_lsn. Is that worth doing?\n> \n> We discussed this before. I agree it would be better to do this\n> in some way, but I fear that if we do it naively, some code might exist\n> that reads the LSN without realizing that it needs to check the validity\n> flag first.\n\nYes, that was my main concern on it. That's error-prone. How about\nremembering the LSN where invalidation happened? It's safe since no\nothers than slot-monitoring functions would look\nlast_invalidated_lsn. It can be reset if active_pid is a valid pid.\n\nInvalidateObsoleteReplicationSlots:\n ...\n \t\tSpinLockAcquire(&s->mutex);\n+\t\ts->data.last_invalidated_lsn = s->data.restart_lsn;\n \t\ts->data.restart_lsn = InvalidXLogRecPtr;\n \t\tSpinLockRelease(&s->mutex);\n\n> On the other hand, maybe this is not a problem in practice, because if\n> such a bug occurs, what will happen is that trying to read WAL from such\n> a slot will return the error message that the WAL file cannot be found.\n> Maybe this is acceptable?\n\nI'm not sure. For my part a problem of that would we need to look\ninto server logs to know what is acutally going on.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 17 Jun 2020 10:17:07 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "\n\nOn 2020/06/17 3:50, Alvaro Herrera wrote:\n> On 2020-Jun-17, Fujii Masao wrote:\n> \n>> While reading InvalidateObsoleteReplicationSlots() code, I found another issue.\n>>\n>> \t\t\tereport(LOG,\n>> \t\t\t\t\t(errmsg(\"terminating walsender %d because replication slot \\\"%s\\\" is too far behind\",\n>> \t\t\t\t\t\t\twspid, NameStr(slotname))));\n>> \t\t\t(void) kill(wspid, SIGTERM);\n>>\n>> wspid indicates the PID of process using the slot. That process can be\n>> a backend, for example, executing pg_replication_slot_advance().\n>> So \"walsender\" in the above log message is not always correct.\n> \n> Good point.\n\nSo InvalidateObsoleteReplicationSlots() can terminate normal backends.\nBut do we want to do this? If we do, we should add a note about this\ncase to the docs. Otherwise the users would be surprised at the termination\nof backends by max_slot_wal_keep_size. I guess that it basically rarely\nhappens, though.\n\n\n>> \t\t\tint\t\t\twspid = ReplicationSlotAcquire(NameStr(slotname),\n>> \t\t\t\t\t\t\t\t\t\t\t\t\t   SAB_Inquire);\n>>\n>> Why do we need to call ReplicationSlotAcquire() here and mark the slot as\n>> used by the checkpointer? Isn't it enough to check directly the slot's\n>> active_pid, instead?\n>>\n>> Maybe ReplicationSlotAcquire() is necessary because\n>> ReplicationSlotRelease() is called later? If so, why do we need to call\n>> ReplicationSlotRelease()? ISTM that we don't need to do that if the slot's\n>> active_pid is zero. No?\n> \n> I think the point here was that in order to modify the slot you have to\n> acquire it -- it's not valid to modify a slot you don't own.\n\nUnderstood. 
Thanks!\n\n\n>> +\t\t/*\n>> +\t\t * Signal to terminate the process using the replication slot.\n>> +\t\t *\n>> +\t\t * Try to signal every 100ms until it succeeds.\n>> +\t\t */\n>> +\t\tif (!killed && kill(active_pid, SIGTERM) == 0)\n>> +\t\t\tkilled = true;\n>> +\t\tConditionVariableTimedSleep(&slot->active_cv, 100,\n>> +\t\t\t\t\t\t\t\t\tWAIT_EVENT_REPLICATION_SLOT_DROP);\n>> +\t} while (ReplicationSlotIsActive(slot, NULL));\n> \n> Note that here you're signalling only once and then sleeping many times\n> in increments of 100ms -- you're not signalling every 100ms as the\n> comment claims -- unless the signal fails, but you don't really expect\n> that. On the contrary, I'd claim that the logic is reversed: if the\n> signal fails, *then* you should stop signalling.\n\nYou mean: in this code path, signaling fails only when the target process\ndisappears just before signaling. So if it fails, slot->active_pid is\nexpected to become 0 even without further signaling. Right?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 17 Jun 2020 11:30:41 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "On 2020-Jun-17, Fujii Masao wrote:\n> On 2020/06/17 3:50, Alvaro Herrera wrote:\n\n> So InvalidateObsoleteReplicationSlots() can terminate normal backends.\n> But do we want to do this? If we want, we should add the note about this\n> case into the docs? Otherwise the users would be surprised at termination\n> of backends by max_slot_wal_keep_size. I guess that it's basically rarely\n> happen, though.\n\nWell, if we could distinguish a walsender from a non-walsender process,\nthen maybe it would make sense to leave backends alive. But do we want\nthat? I admit I don't know what would be the reason to have a\nnon-walsender process with an active slot, so I don't have a good\nopinion on what to do in this case.\n\n> > > +\t\t/*\n> > > +\t\t * Signal to terminate the process using the replication slot.\n> > > +\t\t *\n> > > +\t\t * Try to signal every 100ms until it succeeds.\n> > > +\t\t */\n> > > +\t\tif (!killed && kill(active_pid, SIGTERM) == 0)\n> > > +\t\t\tkilled = true;\n> > > +\t\tConditionVariableTimedSleep(&slot->active_cv, 100,\n> > > +\t\t\t\t\t\t\t\t\tWAIT_EVENT_REPLICATION_SLOT_DROP);\n> > > +\t} while (ReplicationSlotIsActive(slot, NULL));\n> > \n> > Note that here you're signalling only once and then sleeping many times\n> > in increments of 100ms -- you're not signalling every 100ms as the\n> > comment claims -- unless the signal fails, but you don't really expect\n> > that. On the contrary, I'd claim that the logic is reversed: if the\n> > signal fails, *then* you should stop signalling.\n> \n> You mean; in this code path, signaling fails only when the target process\n> disappears just before signaling. So if it fails, slot->active_pid is\n> expected to become 0 even without signaling more. Right?\n\nI guess kill() can also fail if the PID now belongs to a process owned\nby a different user. 
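(For what it's worth, those two kill() failure modes can be demonstrated from userland; try_terminate below is a hypothetical helper, not anything in the tree. ESRCH covers a process that has already exited, EPERM a PID that now belongs to a process we may not signal:)

```python
import os
import signal
import subprocess

def try_terminate(pid):
    """Send SIGTERM and report what happened (illustrative helper)."""
    try:
        os.kill(pid, signal.SIGTERM)
        return "signalled"
    except ProcessLookupError:   # errno ESRCH: process no longer exists
        return "gone"
    except PermissionError:      # errno EPERM: PID now owned by a process we cannot signal
        return "not permitted"

if __name__ == "__main__":
    child = subprocess.Popen(["sleep", "60"])
    print(try_terminate(child.pid))   # a live child we own: "signalled"
    child.wait()                      # reap it; the PID is free again
    print(try_terminate(child.pid))   # "gone"
```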
I think we've disregarded very quick reuse of\nPIDs, so we needn't concern ourselves with it.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 16 Jun 2020 22:40:56 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "At Tue, 16 Jun 2020 22:40:56 -0400, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in \n> On 2020-Jun-17, Fujii Masao wrote:\n> > On 2020/06/17 3:50, Alvaro Herrera wrote:\n> \n> > So InvalidateObsoleteReplicationSlots() can terminate normal backends.\n> > But do we want to do this? If we want, we should add the note about this\n> > case into the docs? Otherwise the users would be surprised at termination\n> > of backends by max_slot_wal_keep_size. I guess that it's basically rarely\n> > happen, though.\n> \n> Well, if we could distinguish a walsender from a non-walsender process,\n> then maybe it would make sense to leave backends alive. But do we want\n> that? I admit I don't know what would be the reason to have a\n> non-walsender process with an active slot, so I don't have a good\n> opinion on what to do in this case.\n\nThe non-walsender backend is actually doing replication work. It\nrather should be killed?\n\n> > > > +\t\t/*\n> > > > +\t\t * Signal to terminate the process using the replication slot.\n> > > > +\t\t *\n> > > > +\t\t * Try to signal every 100ms until it succeeds.\n> > > > +\t\t */\n> > > > +\t\tif (!killed && kill(active_pid, SIGTERM) == 0)\n> > > > +\t\t\tkilled = true;\n> > > > +\t\tConditionVariableTimedSleep(&slot->active_cv, 100,\n> > > > +\t\t\t\t\t\t\t\t\tWAIT_EVENT_REPLICATION_SLOT_DROP);\n> > > > +\t} while (ReplicationSlotIsActive(slot, NULL));\n> > > \n> > > Note that here you're signalling only once and then sleeping many times\n> > > in increments of 100ms -- you're not signalling every 100ms as the\n> > > comment claims -- unless the signal fails, but you don't really expect\n> > > that. On the contrary, I'd claim that the logic is reversed: if the\n> > > signal fails, *then* you should stop signalling.\n> > \n> > You mean; in this code path, signaling fails only when the target process\n> > disappears just before signaling. 
So if it fails, slot->active_pid is\n> > expected to become 0 even without signaling more. Right?\n> \n> I guess kill() can also fail if the PID now belongs to a process owned\n> by a different user. I think we've disregarded very quick reuse of\n> PIDs, so we needn't concern ourselves with it.\n\nThe first call to ConditionVariableTimedSleep doesn't actually\nsleep, so the loop works as expected. But we may make an extra call\nto kill(2). Calling ConditionVariablePrepareToSleep before the\nloop would make it better.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 17 Jun 2020 12:10:31 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "At Wed, 17 Jun 2020 10:17:07 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Tue, 16 Jun 2020 14:31:43 -0400, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in \n> > On 2020-Jun-16, Kyotaro Horiguchi wrote:\n> > \n> > > I noticed the another issue. If some required WALs are removed, the\n> > > slot will be \"invalidated\", that is, restart_lsn is set to invalid\n> > > value. As the result we hardly see the \"lost\" state.\n> > > \n> > > It can be \"fixed\" by remembering the validity of a slot separately\n> > > from restart_lsn. Is that worth doing?\n> > \n> > We discussed this before. I agree it would be better to do this\n> > in some way, but I fear that if we do it naively, some code might exist\n> > that reads the LSN without realizing that it needs to check the validity\n> > flag first.\n> \n> Yes, that was my main concern on it. That's error-prone. How about\n> remembering the LSN where invalidation happened? It's safe since no\n> others than slot-monitoring functions would look\n> last_invalidated_lsn. It can be reset if active_pid is a valid pid.\n> \n> InvalidateObsoleteReplicationSlots:\n> ...\n> \t\tSpinLockAcquire(&s->mutex);\n> +\t\ts->data.last_invalidated_lsn = s->data.restart_lsn;\n> \t\ts->data.restart_lsn = InvalidXLogRecPtr;\n> \t\tSpinLockRelease(&s->mutex);\n\nThe attached does that (Poc). No document fix included.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 17 Jun 2020 13:56:07 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "On 2020/06/17 12:10, Kyotaro Horiguchi wrote:\n> At Tue, 16 Jun 2020 22:40:56 -0400, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in\n>> On 2020-Jun-17, Fujii Masao wrote:\n>>> On 2020/06/17 3:50, Alvaro Herrera wrote:\n>>\n>>> So InvalidateObsoleteReplicationSlots() can terminate normal backends.\n>>> But do we want to do this? If we want, we should add the note about this\n>>> case into the docs? Otherwise the users would be surprised at termination\n>>> of backends by max_slot_wal_keep_size. I guess that it's basically rarely\n>>> happen, though.\n>>\n>> Well, if we could distinguish a walsender from a non-walsender process,\n>> then maybe it would make sense to leave backends alive. But do we want\n>> that? I admit I don't know what would be the reason to have a\n>> non-walsender process with an active slot, so I don't have a good\n>> opinion on what to do in this case.\n> \n> The non-walsender backend is actually doing replication work. It\n> rather should be killed?\n\nI have no better opinion about this. So I agree to leave the logic as it is\nat least for now, i.e., we terminate the process owning the slot whatever\nthe type of process is.\n\n> \n>>>>> +\t\t/*\n>>>>> +\t\t * Signal to terminate the process using the replication slot.\n>>>>> +\t\t *\n>>>>> +\t\t * Try to signal every 100ms until it succeeds.\n>>>>> +\t\t */\n>>>>> +\t\tif (!killed && kill(active_pid, SIGTERM) == 0)\n>>>>> +\t\t\tkilled = true;\n>>>>> +\t\tConditionVariableTimedSleep(&slot->active_cv, 100,\n>>>>> +\t\t\t\t\t\t\t\t\tWAIT_EVENT_REPLICATION_SLOT_DROP);\n>>>>> +\t} while (ReplicationSlotIsActive(slot, NULL));\n>>>>\n>>>> Note that here you're signalling only once and then sleeping many times\n>>>> in increments of 100ms -- you're not signalling every 100ms as the\n>>>> comment claims -- unless the signal fails, but you don't really expect\n>>>> that. 
On the contrary, I'd claim that the logic is reversed: if the\n>>>> signal fails, *then* you should stop signalling.\n>>>\n>>> You mean; in this code path, signaling fails only when the target process\n>>> disappears just before signaling. So if it fails, slot->active_pid is\n>>> expected to become 0 even without signaling more. Right?\n>>\n>> I guess kill() can also fail if the PID now belongs to a process owned\n>> by a different user.\n\nYes. This case means that the PostgreSQL process using the slot disappeared\nand the same PID was assigned to non-PostgreSQL process. So if kill() fails\nfor this reason, we don't need to kill() again.\n\n> I think we've disregarded very quick reuse of\n>> PIDs, so we needn't concern ourselves with it.\n> \n> The first time call to ConditionVariableTimedSleep doen't actually\n> sleep, so the loop works as expected. But we may make an extra call\n> to kill(2). Calling ConditionVariablePrepareToSleep beforehand of the\n> loop would make it better.\n\nSorry I failed to understand your point...\n\nAnyway, the attached is the updated version of the patch. This fixes\nall the issues in InvalidateObsoleteReplicationSlots() that I reported\nupthread.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Wed, 17 Jun 2020 17:01:11 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "At Wed, 17 Jun 2020 17:01:11 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2020/06/17 12:10, Kyotaro Horiguchi wrote:\n> > At Tue, 16 Jun 2020 22:40:56 -0400, Alvaro Herrera\n> > <alvherre@2ndquadrant.com> wrote in\n> >> On 2020-Jun-17, Fujii Masao wrote:\n> >>> On 2020/06/17 3:50, Alvaro Herrera wrote:\n> >>\n> >>> So InvalidateObsoleteReplicationSlots() can terminate normal backends.\n> >>> But do we want to do this? If we want, we should add the note about\n> >>> this\n> >>> case into the docs? Otherwise the users would be surprised at\n> >>> termination\n> >>> of backends by max_slot_wal_keep_size. I guess that it's basically\n> >>> rarely\n> >>> happen, though.\n> >>\n> >> Well, if we could distinguish a walsender from a non-walsender\n> >> process,\n> >> then maybe it would make sense to leave backends alive. But do we\n> >> want\n> >> that? I admit I don't know what would be the reason to have a\n> >> non-walsender process with an active slot, so I don't have a good\n> >> opinion on what to do in this case.\n> > The non-walsender backend is actually doing replication work. It\n> > rather should be killed?\n> \n> I have no better opinion about this. 
So I agree to leave the logic as\n> it is\n> at least for now, i.e., we terminate the process owning the slot\n> whatever\n> the type of process is.\n\nAgreed.\n\n> >>>>> +\t\t/*\n> >>>>> +\t\t * Signal to terminate the process using the replication slot.\n> >>>>> +\t\t *\n> >>>>> +\t\t * Try to signal every 100ms until it succeeds.\n> >>>>> +\t\t */\n> >>>>> +\t\tif (!killed && kill(active_pid, SIGTERM) == 0)\n> >>>>> +\t\t\tkilled = true;\n> >>>>> +\t\tConditionVariableTimedSleep(&slot->active_cv, 100,\n> >>>>> +\t\t\t\t\t\t\t\t\tWAIT_EVENT_REPLICATION_SLOT_DROP);\n> >>>>> +\t} while (ReplicationSlotIsActive(slot, NULL));\n> >>>>\n> >>>> Note that here you're signalling only once and then sleeping many\n> >>>> times\n> >>>> in increments of 100ms -- you're not signalling every 100ms as the\n> >>>> comment claims -- unless the signal fails, but you don't really expect\n> >>>> that. On the contrary, I'd claim that the logic is reversed: if the\n> >>>> signal fails, *then* you should stop signalling.\n> >>>\n> >>> You mean; in this code path, signaling fails only when the target\n> >>> process\n> >>> disappears just before signaling. So if it fails, slot->active_pid is\n> >>> expected to become 0 even without signaling more. Right?\n> >>\n> >> I guess kill() can also fail if the PID now belongs to a process owned\n> >> by a different user.\n> \n> Yes. This case means that the PostgreSQL process using the slot\n> disappeared\n> and the same PID was assigned to non-PostgreSQL process. So if kill()\n> fails\n> for this reason, we don't need to kill() again.\n> \n> > I think we've disregarded very quick reuse of\n> >> PIDs, so we needn't concern ourselves with it.\n> > The first time call to ConditionVariableTimedSleep doen't actually\n> > sleep, so the loop works as expected. But we may make an extra call\n> > to kill(2). 
Calling ConditionVariablePrepareToSleep beforehand of the\n> > loop would make it better.\n> \n> Sorry I failed to understand your point...\n\nMy point is that ConditionVariableTimedSleep does *not* sleep on the CV\nthe first time in this usage. The new version avoids the useless\nkill(2) call anyway, but may still make an extra call to\nReplicationSlotAcquireInternal. I think we should call\nConditionVariablePrepareToSleep before the surrounding for statement\nblock.\n\n> Anyway, the attached is the updated version of the patch. This fixes\n> all the issues in InvalidateObsoleteReplicationSlots() that I reported\n> upthread.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 17 Jun 2020 17:30:58 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "On 2020/06/17 17:30, Kyotaro Horiguchi wrote:\n> At Wed, 17 Jun 2020 17:01:11 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>\n>>\n>> On 2020/06/17 12:10, Kyotaro Horiguchi wrote:\n>>> At Tue, 16 Jun 2020 22:40:56 -0400, Alvaro Herrera\n>>> <alvherre@2ndquadrant.com> wrote in\n>>>> On 2020-Jun-17, Fujii Masao wrote:\n>>>>> On 2020/06/17 3:50, Alvaro Herrera wrote:\n>>>>\n>>>>> So InvalidateObsoleteReplicationSlots() can terminate normal backends.\n>>>>> But do we want to do this? If we want, we should add the note about\n>>>>> this\n>>>>> case into the docs? Otherwise the users would be surprised at\n>>>>> termination\n>>>>> of backends by max_slot_wal_keep_size. I guess that it's basically\n>>>>> rarely\n>>>>> happen, though.\n>>>>\n>>>> Well, if we could distinguish a walsender from a non-walsender\n>>>> process,\n>>>> then maybe it would make sense to leave backends alive. But do we\n>>>> want\n>>>> that? I admit I don't know what would be the reason to have a\n>>>> non-walsender process with an active slot, so I don't have a good\n>>>> opinion on what to do in this case.\n>>> The non-walsender backend is actually doing replication work. It\n>>> rather should be killed?\n>>\n>> I have no better opinion about this. 
So I agree to leave the logic as\n>> it is\n>> at least for now, i.e., we terminate the process owning the slot\n>> whatever\n>> the type of process is.\n> \n> Agreed.\n> \n>>>>>>> +\t\t/*\n>>>>>>> +\t\t * Signal to terminate the process using the replication slot.\n>>>>>>> +\t\t *\n>>>>>>> +\t\t * Try to signal every 100ms until it succeeds.\n>>>>>>> +\t\t */\n>>>>>>> +\t\tif (!killed && kill(active_pid, SIGTERM) == 0)\n>>>>>>> +\t\t\tkilled = true;\n>>>>>>> +\t\tConditionVariableTimedSleep(&slot->active_cv, 100,\n>>>>>>> +\t\t\t\t\t\t\t\t\tWAIT_EVENT_REPLICATION_SLOT_DROP);\n>>>>>>> +\t} while (ReplicationSlotIsActive(slot, NULL));\n>>>>>>\n>>>>>> Note that here you're signalling only once and then sleeping many\n>>>>>> times\n>>>>>> in increments of 100ms -- you're not signalling every 100ms as the\n>>>>>> comment claims -- unless the signal fails, but you don't really expect\n>>>>>> that. On the contrary, I'd claim that the logic is reversed: if the\n>>>>>> signal fails, *then* you should stop signalling.\n>>>>>\n>>>>> You mean; in this code path, signaling fails only when the target\n>>>>> process\n>>>>> disappears just before signaling. So if it fails, slot->active_pid is\n>>>>> expected to become 0 even without signaling more. Right?\n>>>>\n>>>> I guess kill() can also fail if the PID now belongs to a process owned\n>>>> by a different user.\n>>\n>> Yes. This case means that the PostgreSQL process using the slot\n>> disappeared\n>> and the same PID was assigned to non-PostgreSQL process. So if kill()\n>> fails\n>> for this reason, we don't need to kill() again.\n>>\n>>> I think we've disregarded very quick reuse of\n>>>> PIDs, so we needn't concern ourselves with it.\n>>> The first time call to ConditionVariableTimedSleep doen't actually\n>>> sleep, so the loop works as expected. But we may make an extra call\n>>> to kill(2). 
Calling ConditionVariablePrepareToSleep beforehand of the\n>>> loop would make it better.\n>>\n>> Sorry I failed to understand your point...\n> \n> My point is the ConditionVariableTimedSleep does *not* sleep on the CV\n> first time in this usage. The new version anyway avoids useless\n> kill(2) call, but still may make an extra call to\n> ReplicationSlotAcquireInternal. I think we should call\n> ConditionVariablePrepareToSleep before the sorrounding for statement\n> block.\n\nOK, so what about the attached patch? I added ConditionVariablePrepareToSleep()\njust before entering the \"for\" loop in InvalidateObsoleteReplicationSlots().\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Wed, 17 Jun 2020 20:13:01 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "I think passing the slot name when the slot is also passed is useless\nand wasteful; it'd be better to pass NULL for the name and ignore the\nstrcmp() in that case -- in fact I suggest to forbid passing both name\nand slot. (Any failure there would risk raising an error during\ncheckpoint, which is undesirable.)\n\nSo I propose the following tweaks to your patch, and otherwise +1.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 17 Jun 2020 14:04:37 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "On 2020-Jun-17, Kyotaro Horiguchi wrote:\n\n> @@ -342,7 +351,14 @@ pg_get_replication_slots(PG_FUNCTION_ARGS)\n> \t\telse\n> \t\t\tnulls[i++] = true;\n> \n> -\t\twalstate = GetWALAvailability(slot_contents.data.restart_lsn);\n> +\t\t/* use last_invalidated_lsn when the slot is invalidated */\n> +\t\tif (XLogRecPtrIsInvalid(slot_contents.data.restart_lsn))\n> +\t\t\ttargetLSN = slot_contents.last_invalidated_lsn;\n> +\t\telse\n> +\t\t\ttargetLSN = slot_contents.data.restart_lsn;\n> +\n> +\t\twalstate = GetWALAvailability(targetLSN, last_removed_seg,\n> +\t\t\t\t\t\t\t\t\t slot_contents.active_pid != 0);\n\nYeah, this approach seems better overall. I'll see if I can get this\ndone after lunch.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 17 Jun 2020 14:50:14 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "\n\nOn 2020/06/18 3:04, Alvaro Herrera wrote:\n> I think passing the slot name when the slot is also passed is useless\n> and wasteful; it'd be better to pass NULL for the name and ignore the\n> strcmp() in that case -- in fact I suggest to forbid passing both name\n> and slot. (Any failure there would risk raising an error during\n> checkpoint, which is undesirable.)\n\nSounds reasonable.\n\n> So I propose the following tweaks to your patch, and otherwise +1.\n\nThanks for the patch! It looks good to me.\n\nBarring any objections, I will commit the patches in the master and\nv13 branches later.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 18 Jun 2020 10:29:27 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "At Wed, 17 Jun 2020 20:13:01 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> > ReplicationSlotAcquireInternal. I think we should call\n> > ConditionVariablePrepareToSleep before the surrounding for statement\n> > block.\n> \n> OK, so what about the attached patch? I added\n> ConditionVariablePrepareToSleep()\n> just before entering the \"for\" loop in\n> InvalidateObsoleteReplicationSlots().\n\nThanks.\n\nReplicationSlotAcquireInternal:\n+ * If *slot == NULL, search for the slot with the given name.\n\n'*' seems needless here.\n\n\nThe patch moves ConditionVariablePrepareToSleep. We need to call the\nfunction before looking into active_pid as originally commented.\nSince it is not protected by ReplicationSlotControlLock, just before\nreleasing the lock is not correct.\n\nThe attached on top of the v3 fixes that.\n\n\n\n+ s = (slot == NULL) ? SearchNamedReplicationSlot(name) : slot;\n+ if (s == NULL || !s->in_use || strcmp(name, NameStr(s->data.name)) != 0)\n\nThe conditions in the second line are needed for the case slot is\ngiven, but it is already done in SearchNamedReplicationSlot if slot is\nnot given. I would like something like the following instead, but I\ndon't insist on it.\n\n ReplicationSlot *s = NULL;\n ...\n if (!slot)\n s = SearchNamedReplicationSlot(name);\n else if(s->in_use && strcmp(name, NameStr(s->data.name)))\n s = slot;\n\n\n+ ereport(ERROR,\n+ (errcode(ERRCODE_UNDEFINED_OBJECT),\n+ errmsg(\"replication slot \\\"%s\\\" does not exist\", name)));\n\nThe error message is not right when the given slot doesn't match the\ngiven name. It might be better to leave it to the caller. Currently\nno such caller exists so I don't insist on this but the message should\nbe revised otherwise.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 18 Jun 2020 11:44:29 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "On 2020/06/18 11:44, Kyotaro Horiguchi wrote:\n> At Wed, 17 Jun 2020 20:13:01 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>> ReplicationSlotAcquireInternal. I think we should call\n>>> ConditionVariablePrepareToSleep before the sorrounding for statement\n>>> block.\n>>\n>> OK, so what about the attached patch? I added\n>> ConditionVariablePrepareToSleep()\n>> just before entering the \"for\" loop in\n>> InvalidateObsoleteReplicationSlots().\n> \n> Thanks.\n\nThanks for the review!\n\n> \n> ReplicationSlotAcquireInternal:\n> + * If *slot == NULL, search for the slot with the given name.\n> \n> '*' seems needless here.\n\nFixed.\n\nAlso I added \"Only one of slot and name can be specified.\" into\nthe comments of ReplicationSlotAcquireInternal().\n\n\n> The patch moves ConditionVariablePrepareToSleep. We need to call the\n> function before looking into active_pid as originally commented.\n> Since it is not protected by ReplicationSlotControLock, just before\n> releasing the lock is not correct.\n> \n> The attached on top of the v3 fixes that.\n\nYes, you're right. I merged your 0001.patch into mine.\n\n+\t\tif (behavior != SAB_Inquire)\n+\t\t\tConditionVariablePrepareToSleep(&s->active_cv);\n+\telse if (behavior != SAB_Inquire)\n\nIsn't \"behavior == SAB_Block\" condition better here?\nI changed the patch that way.\n\nThe attached is the updated version of the patch.\nI also merged Alvaro's patch into this.\n\n\n> + s = (slot == NULL) ? SearchNamedReplicationSlot(name) : slot;\n> + if (s == NULL || !s->in_use || strcmp(name, NameStr(s->data.name)) != 0)\n> \n> The conditions in the second line is needed for the case slot is\n> given, but it is already done in SearchNamedReplicationSlot if slot is\n> not given. 
I would like something like the following instead, but I\n> don't insist on it.\n\nYes, I got rid of strcmp() check, but left the in_use check as it is.\nI like that because it's simpler.\n\n\n> ReplicationSlot *s = NULL;\n> ...\n> if (!slot)\n> s = SearchNamedReplicationSlot(name);\n> else if(s->in_use && strcmp(name, NameStr(s->data.name)))\n> s = slot;\n> \n> \n> + ereport(ERROR,\n> + (errcode(ERRCODE_UNDEFINED_OBJECT),\n> + errmsg(\"replication slot \\\"%s\\\" does not exist\", name)));\n> \n> The error message is not right when the given slot doesn't match the\n> given name.\n\nThis doesn't happen after applying Alvaro's patch.\n\nBTW, using \"name\" here is not valid because it may be NULL.\nSo I added the following code and used \"slot_name\" in log messages.\n\n+\tslot_name = name ? name : NameStr(slot->data.name);\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Thu, 18 Jun 2020 14:40:55 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "On 2020/06/18 14:40, Fujii Masao wrote:\n> \n> \n> On 2020/06/18 11:44, Kyotaro Horiguchi wrote:\n>> At Wed, 17 Jun 2020 20:13:01 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>>> ReplicationSlotAcquireInternal. I think we should call\n>>>> ConditionVariablePrepareToSleep before the surrounding for statement\n>>>> block.\n>>>\n>>> OK, so what about the attached patch? I added\n>>> ConditionVariablePrepareToSleep()\n>>> just before entering the \"for\" loop in\n>>> InvalidateObsoleteReplicationSlots().\n>>\n>> Thanks.\n> \n> Thanks for the review!\n> \n>>\n>> ReplicationSlotAcquireInternal:\n>> + * If *slot == NULL, search for the slot with the given name.\n>>\n>> '*' seems needless here.\n> \n> Fixed.\n> \n> Also I added \"Only one of slot and name can be specified.\" into\n> the comments of ReplicationSlotAcquireInternal().\n> \n> \n>> The patch moves ConditionVariablePrepareToSleep. We need to call the\n>> function before looking into active_pid as originally commented.\n>> Since it is not protected by ReplicationSlotControlLock, just before\n>> releasing the lock is not correct.\n>>\n>> The attached on top of the v3 fixes that.\n> \n> Yes, you're right. I merged your 0001.patch into mine.\n> \n> +\t\tif (behavior != SAB_Inquire)\n> +\t\t\tConditionVariablePrepareToSleep(&s->active_cv);\n> +\telse if (behavior != SAB_Inquire)\n> \n> Isn't \"behavior == SAB_Block\" condition better here?\n> I changed the patch that way.\n> \n> The attached is the updated version of the patch.\n> I also merged Alvaro's patch into this.\n> \n> \n>> +\ts = (slot == NULL) ? 
SearchNamedReplicationSlot(name) : slot;\n>> +\tif (s == NULL || !s->in_use || strcmp(name, NameStr(s->data.name)) != 0)\n>>\n>> The conditions in the second line are needed for the case slot is\n>> given, but it is already done in SearchNamedReplicationSlot if slot is\n>> not given. I would like something like the following instead, but I\n>> don't insist on it.\n> \n> Yes, I got rid of strcmp() check, but left the in_use check as it is.\n> I like that because it's simpler.\n> \n> \n>> \tReplicationSlot *s = NULL;\n>> \t...\n>> \tif (!slot)\n>> \t\ts = SearchNamedReplicationSlot(name);\n>> \telse if(s->in_use && strcmp(name, NameStr(s->data.name)))\n>> \t\ts = slot;\n>>\n>>\n>> +\t\tereport(ERROR,\n>> +\t\t\t\t(errcode(ERRCODE_UNDEFINED_OBJECT),\n>> +\t\t\t\t errmsg(\"replication slot \\\"%s\\\" does not exist\", name)));\n>>\n>> The error message is not right when the given slot doesn't match the\n>> given name.\n> \n> This doesn't happen after applying Alvaro's patch.\n> \n> BTW, using \"name\" here is not valid because it may be NULL.\n> So I added the following code and used \"slot_name\" in log messages.\n> \n> +\tslot_name = name ? name : NameStr(slot->data.name);\n\nSorry, this caused compiler failure. So I fixed that and\nattached the updated version of the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Thu, 18 Jun 2020 14:54:47 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "At Thu, 18 Jun 2020 14:54:47 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> Sorry, this caused compiler failure. So I fixed that and\n> attached the updated version of the patch.\n\nAt Thu, 18 Jun 2020 14:40:55 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> > + errmsg(\"replication slot \\\"%s\\\" does not exist\", name)));\n> > The error message is not right when the given slot doesn't match the\n> > given name.\n> \n> This doesn't happen after applying Alvaro's patch.\n\nIf name is specified (so slot is NULL) to\nReplicationSlotAcquireInternal and the slot is not found, the ereport\nin following code dereferences NULL.\n\n====\n if (s == NULL || !s->in_use)\n {\n LWLockRelease(ReplicationSlotControlLock);\n\n if (behavior == SAB_Inquire)\n return -1;\n ereport(ERROR,\n (errcode(ERRCODE_UNDEFINED_OBJECT),\n errmsg(\"replication slot \\\"%s\\\" does not exist\",\n name ? name : NameStr(slot->data.name))));\n }\n====\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 18 Jun 2020 16:32:59 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "Mmm. I hurried too much..\n\nAt Thu, 18 Jun 2020 16:32:59 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> If name is specified (so slot is NULL) to\n> ReplicationSlotAcquireInternal and the slot is not found, the ereport\n> in following code dereferences NULL.\n\nThat's bogus. It is using name in that case. Sorry for the noise.\n\nI don't find a problem by a brief look on it.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 18 Jun 2020 16:36:11 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "\n\nOn 2020/06/18 16:36, Kyotaro Horiguchi wrote:\n> Mmm. I hurried too much..\n> \n> At Thu, 18 Jun 2020 16:32:59 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n>> If name is specified (so slot is NULL) to\n>> ReplicationSlotAcquireInternal and the slot is not found, the ereport\n>> in following code dereferences NULL.\n> \n> That's bogus. It is using name in that case. Sorry for the noise.\n> \n> I don't find a problem by a brief look on it.\n\nThanks for the review! I pushed the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 19 Jun 2020 17:23:24 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "On 2020-Jun-17, Kyotaro Horiguchi wrote:\n\n> @@ -9524,7 +9533,7 @@ GetWALAvailability(XLogRecPtr targetLSN)\n> \t * the first WAL segment file since startup, which causes the status being\n> \t * wrong under certain abnormal conditions but that doesn't actually harm.\n> \t */\n> -\toldestSeg = XLogGetLastRemovedSegno() + 1;\n> +\toldestSeg = last_removed_seg + 1;\n> \n> \t/* calculate oldest segment by max_wal_size and wal_keep_segments */\n> \tXLByteToSeg(currpos, currSeg, wal_segment_size);\n\nThis hunk should have updated the comment two lines above. However:\n\n> @@ -272,6 +273,14 @@ pg_get_replication_slots(PG_FUNCTION_ARGS)\n> \trsinfo->setResult = tupstore;\n> \trsinfo->setDesc = tupdesc;\n> \n> +\t/*\n> +\t * Remember the last removed segment at this point for the consistency in\n> +\t * this table. Since there's no interlock between slot data and\n> +\t * checkpointer, the segment can be removed in-between, but that doesn't\n> +\t * make any practical difference.\n> +\t */\n> +\tlast_removed_seg = XLogGetLastRemovedSegno();\n\nI am mystified as to why you added this change. I understand that your\npoint here is to make all slots reported their state as compared to the\nsame LSN, but why do it like that? If a segment is removed in between,\nit could mean that the view reports more lies than it would if we update\nthe segno for each slot. I mean, suppose two slots are lagging behind\nand one is reported as 'extended' because when we compute it it's still\nin range; then a segment is removed. With your coding, we'll report\nboth as extended, but with the original coding, we'll report the new one\nas lost. By the time the user reads the result, they'll read one\nincorrect report with the original code, and two incorrect reports with\nyour code. So ... 
yes it might be more consistent, but what does that\nbuy the user?\n\nOTOH it makes GetWALAvailability gain a new argument, which we have to\ndocument.\n\n> +\t/*\n> +\t * However segments required by the slot has been lost, if walsender is\n> +\t * active the walsender can read into the first reserved slot.\n> +\t */\n> +\tif (slot_is_active)\n> +\t\treturn WALAVAIL_BEING_REMOVED;\n\nI don't understand this comment; can you please clarify what you mean?\n\nI admit I don't like this slot_is_active argument you propose to add to\nGetWALAvailability either; previously the function can be called with\nan LSN coming from anywhere, not just a slot; the new argument implies\nthat the LSN comes from a slot. (Your proposed patch doesn't document\nthis one either.)\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 19 Jun 2020 18:23:59 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "Thanks for looking this.\n\nAt Fri, 19 Jun 2020 18:23:59 -0400, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in \n> On 2020-Jun-17, Kyotaro Horiguchi wrote:\n> \n> > @@ -9524,7 +9533,7 @@ GetWALAvailability(XLogRecPtr targetLSN)\n> > \t * the first WAL segment file since startup, which causes the status being\n> > \t * wrong under certain abnormal conditions but that doesn't actually harm.\n> > \t */\n> > -\toldestSeg = XLogGetLastRemovedSegno() + 1;\n> > +\toldestSeg = last_removed_seg + 1;\n> > \n> > \t/* calculate oldest segment by max_wal_size and wal_keep_segments */\n> > \tXLByteToSeg(currpos, currSeg, wal_segment_size);\n> \n> This hunk should have updated the comment two lines above. However:\n> \n> > @@ -272,6 +273,14 @@ pg_get_replication_slots(PG_FUNCTION_ARGS)\n> > \trsinfo->setResult = tupstore;\n> > \trsinfo->setDesc = tupdesc;\n> > \n> > +\t/*\n> > +\t * Remember the last removed segment at this point for the consistency in\n> > +\t * this table. Since there's no interlock between slot data and\n> > +\t * checkpointer, the segment can be removed in-between, but that doesn't\n> > +\t * make any practical difference.\n> > +\t */\n> > +\tlast_removed_seg = XLogGetLastRemovedSegno();\n> \n> I am mystified as to why you added this change. I understand that your\n> point here is to make all slots reported their state as compared to the\n> same LSN, but why do it like that? If a segment is removed in between,\n> it could mean that the view reports more lies than it would if we update\n> the segno for each slot. I mean, suppose two slots are lagging behind\n> and one is reported as 'extended' because when we compute it it's still\n> in range; then a segment is removed. With your coding, we'll report\n> both as extended, but with the original coding, we'll report the new one\n> as lost. By the time the user reads the result, they'll read one\n> incorrect report with the original code, and two incorrect reports with\n> your code. So ... 
yes it might be more consistent, but what does that\n> buy the user?\n\nI agree to you. Anyway the view may show \"wrong\" statuses if\nconcurrent WAL-file removal is running. But I can understand it is\nbetter that the numbers in a view are consistent. The change\ncontributes only to that point. So I noted as \"doesn't make any\npractical difference\". Since it is going to be removed, I removed the\nchanges for the part.\n\nhttps://www.postgresql.org/message-id/9ddfbf8c-2f67-904d-44ed-cf8bc5916228@oss.nttdata.com\n\n> OTOH it makes GetWALAvailability gain a new argument, which we have to\n> document.\n> \n> > +\t/*\n> > +\t * However segments required by the slot has been lost, if walsender is\n> > +\t * active the walsender can read into the first reserved slot.\n> > +\t */\n> > +\tif (slot_is_active)\n> > +\t\treturn WALAVAIL_BEING_REMOVED;\n> \n> I don't understand this comment; can you please clarify what you mean?\n\nI have had comments that the \"lost\" state should be a definite state,\nthat is, a state mustn't go back to other states. I had the same from\nFujii-san again.\n\nSuppose we are starting from the following situation:\n\nState A:\n|---- seg n-1 ----|---- seg n ----|\n ^\n X (restart_lsn of slot S) - max_slot_wal_keep_size \n\nIf the segment n-1 is removed, slot S's status becomes\n\"lost\". However, if the walsender that is using the slot has not been\nkilled yet, the point X can move foward to the segment n (State B).\n\nState B:\n|XXXX seg n-1 XXXX|---- seg n ----|\n ^\n X (restart_lsn of slot S) - max_slot_wal_keep_size \n\nThis is the normal (or extend) state. If we want to the state \"lost\"\nto be definitive, we cannot apply the state label \"lost\" to State A if\nit is active.\n\nWALAVAIL_BEING_REMOVED (I noticed it has been removed for a wrong\nreason so I revived it in this patch [1].) 
was used for the same state,\nthat is, the segment at restart_lsn will be removed soon but not yet.\n\n1: https://www.postgresql.org/message-id/20200406.185027.648866525989475817.horikyota.ntt@gmail.com\n\n> I admit I don't like this slot_is_active argument you propose to add to\n> GetWALAvailability either; previously the function can be called with\n> an LSN coming from anywhere, not just a slot; the new argument implies\n> that the LSN comes from a slot. (Your proposed patch doesn't document\n> this one either.)\n\nAgreed. I felt like you at the time. I came up with another way after\nhearing that from you.\n\nIn the attached GetWALAvailability() returns the state assuming the\nwalsender is not active. And the caller (pg_get_replication_slots())\nconsiders the case where the walsender is active.\n\nregares.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 23 Jun 2020 17:41:44 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "On 2020-Jun-16, Kyotaro Horiguchi wrote:\n\n> I saw that the \"reserved\" is the state where slots are working to\n> retain segments, and \"normal\" is the state to indicate that \"WAL\n> segments are within max_wal_size\", which is orthogonal to the notion\n> of \"reserved\". So it seems to me useless when the retained WAL\n> segments cannot exceeds max_wal_size.\n> \n> With longer description they would be:\n> \n> \"reserved under max_wal_size\"\n> \"reserved over max_wal_size\"\n> \"lost some segements\"\n\n> Come to think of that, I realized that my trouble was just the\n> wording. Are the following wordings make sense to you?\n> \n> \"reserved\" - retained within max_wal_size\n> \"extended\" - retained over max_wal_size\n> \"lost\" - lost some segments\n\nSo let's add Unreserved to denote the state that it's over the slot size\nbut no segments have been removed yet:\n\n* Reserved\tunder max_wal_size\n* Extended\tpast max_wal_size, but still within wal_keep_segments or\n \t\tmaximum slot size.\n* Unreserved\tPast wal_keep_segments and the maximum slot size, but\n\t\tnot yet removed. Recoverable condition.\n* Lost\t\tlost segments. Unrecoverable condition.\n\n\nIt seems better to me to save the invalidation LSN in the persistent\ndata rather than the in-memory data that's lost on restart. As is, we\nwould lose the status in a restart, which doesn't seem good to me. It's\njust eight extra bytes to write ... should be pretty much free.\n\nThis version I propose is based on the one you posted earlier today and\nis what I propose for commit.\n\nThanks!\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 23 Jun 2020 19:06:25 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "At Tue, 23 Jun 2020 19:06:25 -0400, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in \n> On 2020-Jun-16, Kyotaro Horiguchi wrote:\n> \n> > I saw that the \"reserved\" is the state where slots are working to\n> > retain segments, and \"normal\" is the state to indicate that \"WAL\n> > segments are within max_wal_size\", which is orthogonal to the notion\n> > of \"reserved\". So it seems to me useless when the retained WAL\n> > segments cannot exceeds max_wal_size.\n> > \n> > With longer description they would be:\n> > \n> > \"reserved under max_wal_size\"\n> > \"reserved over max_wal_size\"\n> > \"lost some segements\"\n> \n> > Come to think of that, I realized that my trouble was just the\n> > wording. Are the following wordings make sense to you?\n> > \n> > \"reserved\" - retained within max_wal_size\n> > \"extended\" - retained over max_wal_size\n> > \"lost\" - lost some segments\n> \n> So let's add Unreserved to denote the state that it's over the slot size\n> but no segments have been removed yet:\n\nOh! Thanks for the more proper word. It looks good to me.\n\n> * Reserved\tunder max_wal_size\n> * Extended\tpast max_wal_size, but still within wal_keep_segments or\n> \t\tmaximum slot size.\n> * Unreserved\tPast wal_keep_segments and the maximum slot size, but\n> \t\tnot yet removed. Recoverable condition.\n> * Lost\t\tlost segments. Unrecoverable condition.\n\nLook good, too.\n\n> It seems better to me to save the invalidation LSN in the persistent\n> data rather than the in-memory data that's lost on restart. As is, we\n> would lose the status in a restart, which doesn't seem good to me. It's\n> just eight extra bytes to write ... should be pretty much free.\n\nAgreed.\n\n> This version I propose is based on the one you posted earlier today and\n> is what I propose for commit.\n\n\n-\t/* slot does not reserve WAL. Either deactivated, or has never been active */\n+\t/*\n+\t * slot does not reserve WAL. 
Either deactivated, or has never been active\n+\t */\n\nSorry, this is my fault. The change is useless. The code for\nWALAVAIL_REMOVED looks good.\n\n\n # Advance WAL again without checkpoint, reducing remain by 6 MB.\n+$result = $node_master->safe_psql('postgres',\n+\t\"SELECT wal_status, restart_lsn, min_safe_lsn FROM pg_replication_slots WHERE slot_name = 'rep1'\"\n+);\n+print $result, \"\\n\";\n\nSorry this is my fault, too. Removed in the attached.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 24 Jun 2020 11:15:40 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "Thanks for those corrections.\n\nI have pushed this. I think all problems Masao-san reported have been\ndealt with, so we're done here.\n\nThanks!\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 24 Jun 2020 14:27:58 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "\n\nOn 2020/06/25 3:27, Alvaro Herrera wrote:\n> Thanks for those corrections.\n> \n> I have pushed this. I think all problems Masao-san reported have been\n> dealt with, so we're done here.\n\nSorry for my late to reply here...\n\nThanks for committing the patch and improving the feature!\n\n\t/*\n\t * Find the oldest extant segment file. We get 1 until checkpoint removes\n\t * the first WAL segment file since startup, which causes the status being\n\t * wrong under certain abnormal conditions but that doesn't actually harm.\n\t */\n\toldestSeg = XLogGetLastRemovedSegno() + 1;\n\nI see the point of the above comment, but this can cause wal_status to be\nchanged from \"lost\" to \"unreserved\" after the server restart. Isn't this\nreally confusing? At least it seems better to document that behavior.\n\nOr if we *can ensure* that the slot with invalidated_at set always means\n\"lost\" slot, we can judge that wal_status is \"lost\" without using fragile\nXLogGetLastRemovedSegno(). Thought?\n\nOr XLogGetLastRemovedSegno() should be fixed so that it returns valid\nvalue even after the restart?\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 25 Jun 2020 12:34:16 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "On 2020-Jun-25, Fujii Masao wrote:\n\n> \t/*\n> \t * Find the oldest extant segment file. We get 1 until checkpoint removes\n> \t * the first WAL segment file since startup, which causes the status being\n> \t * wrong under certain abnormal conditions but that doesn't actually harm.\n> \t */\n> \toldestSeg = XLogGetLastRemovedSegno() + 1;\n> \n> I see the point of the above comment, but this can cause wal_status to be\n> changed from \"lost\" to \"unreserved\" after the server restart. Isn't this\n> really confusing? At least it seems better to document that behavior.\n\nHmm.\n\n> Or if we *can ensure* that the slot with invalidated_at set always means\n> \"lost\" slot, we can judge that wal_status is \"lost\" without using fragile\n> XLogGetLastRemovedSegno(). Thought?\n\nHmm, this sounds compelling -- I think it just means we need to ensure\nwe reset invalidated_at to zero if the slot's restart_lsn is set to a\ncorrect position afterwards. I don't think we have any operation that\ndoes that, so it should be safe -- hopefully I didn't overlook anything?\nNeither copy nor advance seem to work with a slot that has invalid\nrestart_lsn.\n\n> Or XLogGetLastRemovedSegno() should be fixed so that it returns valid\n> value even after the restart?\n\nThis seems more work to implement.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 24 Jun 2020 23:57:18 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "\n\nOn 2020/06/25 12:57, Alvaro Herrera wrote:\n> On 2020-Jun-25, Fujii Masao wrote:\n> \n>> \t/*\n>> \t * Find the oldest extant segment file. We get 1 until checkpoint removes\n>> \t * the first WAL segment file since startup, which causes the status being\n>> \t * wrong under certain abnormal conditions but that doesn't actually harm.\n>> \t */\n>> \toldestSeg = XLogGetLastRemovedSegno() + 1;\n>>\n>> I see the point of the above comment, but this can cause wal_status to be\n>> changed from \"lost\" to \"unreserved\" after the server restart. Isn't this\n>> really confusing? At least it seems better to document that behavior.\n> \n> Hmm.\n> \n>> Or if we *can ensure* that the slot with invalidated_at set always means\n>> \"lost\" slot, we can judge that wal_status is \"lost\" without using fragile\n>> XLogGetLastRemovedSegno(). Thought?\n> \n> Hmm, this sounds compelling -- I think it just means we need to ensure\n> we reset invalidated_at to zero if the slot's restart_lsn is set to a\n> correct position afterwards.\n\nYes.\n\n> I don't think we have any operation that\n> does that, so it should be safe -- hopefully I didn't overlook anything?\n\nWe need to call ReplicationSlotMarkDirty() and ReplicationSlotSave()\njust after setting invalidated_at and restart_lsn in InvalidateObsoleteReplicationSlots()?\nOtherwise, restart_lsn can go back to the previous value after the restart.\n\ndiff --git a/src/backend/replication/slot.c b/src/backend/replication/slot.c\nindex e8761f3a18..5584e5dd2c 100644\n--- a/src/backend/replication/slot.c\n+++ b/src/backend/replication/slot.c\n@@ -1229,6 +1229,13 @@ restart:\n s->data.invalidated_at = s->data.restart_lsn;\n s->data.restart_lsn = InvalidXLogRecPtr;\n SpinLockRelease(&s->mutex);\n+\n+ /*\n+ * Save this invalidated slot to disk, to ensure that the slot\n+ * is still invalid even after the server restart.\n+ */\n+ ReplicationSlotMarkDirty();\n+ ReplicationSlotSave();\n ReplicationSlotRelease();\n \n /* if 
we did anything, start from scratch */\n\nMaybe we don't need to do this if the slot is temporary?\n\n> Neither copy nor advance seem to work with a slot that has invalid\n> restart_lsn.\n> \n>> Or XLogGetLastRemovedSegno() should be fixed so that it returns valid\n>> value even after the restart?\n> \n> This seems more work to implement.\n\nYes.\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 25 Jun 2020 14:35:34 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "At Thu, 25 Jun 2020 14:35:34 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2020/06/25 12:57, Alvaro Herrera wrote:\n> > On 2020-Jun-25, Fujii Masao wrote:\n> > \n> >> \t/*\n> >> \t * Find the oldest extant segment file. We get 1 until checkpoint removes\n> >> \t * the first WAL segment file since startup, which causes the status being\n> >> \t * wrong under certain abnormal conditions but that doesn't actually harm.\n> >> \t */\n> >> \toldestSeg = XLogGetLastRemovedSegno() + 1;\n> >>\n> >> I see the point of the above comment, but this can cause wal_status to\n> >> be\n> >> changed from \"lost\" to \"unreserved\" after the server restart. Isn't\n> >> this\n> >> really confusing? At least it seems better to document that behavior.\n> > Hmm.\n> >\n> >> Or if we *can ensure* that the slot with invalidated_at set always\n> >> means\n> >> \"lost\" slot, we can judge that wal_status is \"lost\" without using\n> >> fragile\n> >> XLogGetLastRemovedSegno(). Thought?\n> > Hmm, this sounds compelling -- I think it just means we need to ensure\n> > we reset invalidated_at to zero if the slot's restart_lsn is set to a\n> > correct position afterwards.\n> \n> Yes.\n\nIt is error-prone restriction, as discussed before.\n\nWithout changing updator-side, invalid restart_lsn AND valid\ninvalidated_at can be regarded as the lost state. 
With the following\nchange suggested by Fujii-san we can avoid the confusing status.\n\nWith attached first patch on top of the slot-dirtify fix below, we get\n\"lost\" for invalidated slots after restart.\n\n> > I don't think we have any operation that\n> > does that, so it should be safe -- hopefully I didn't overlook\n> > anything?\n> \n> We need to call ReplicationSlotMarkDirty() and ReplicationSlotSave()\n> just after setting invalidated_at and restart_lsn in\n> InvalidateObsoleteReplicationSlots()?\n> Otherwise, restart_lsn can go back to the previous value after the\n> restart.\n> \n> diff --git a/src/backend/replication/slot.c\n> b/src/backend/replication/slot.c\n> index e8761f3a18..5584e5dd2c 100644\n> --- a/src/backend/replication/slot.c\n> +++ b/src/backend/replication/slot.c\n> @@ -1229,6 +1229,13 @@ restart:\n> s->data.invalidated_at = s->data.restart_lsn;\n> s->data.restart_lsn = InvalidXLogRecPtr;\n> SpinLockRelease(&s->mutex);\n> +\n> + /*\n> + * Save this invalidated slot to disk, to ensure that the slot\n> + * is still invalid even after the server restart.\n> + */\n> + ReplicationSlotMarkDirty();\n> + ReplicationSlotSave();\n> ReplicationSlotRelease();\n> /* if we did anything, start from scratch */\n> \n> Maybe we don't need to do this if the slot is temporary?\n\nThe only difference of temprary slots from persistent one seems to be\nan attribute \"persistency\". Actually,\ncreate_physica_replication_slot() does the aboves for temporary slots.\n\n> > Neither copy nor advance seem to work with a slot that has invalid\n> > restart_lsn.\n> > \n> >> Or XLogGetLastRemovedSegno() should be fixed so that it returns valid\n> >> value even after the restart?\n> > This seems more work to implement.\n> \n> Yes.\n\nThe confusing status can be avoided without fixing it, but I prefer to\nfix it. As Fujii-san suggested upthread, couldn't we remember\nlastRemovedSegNo in the contorl file? 
(Yeah, it cuases a bump of\nPG_CONTROL_VERSION and CATALOG_VERSION_NO?).\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 25 Jun 2020 17:28:03 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Review for GetWALAvailability()"
},
{
"msg_contents": "On 2020-Jun-25, Kyotaro Horiguchi wrote:\n\n> It is error-prone restriction, as discussed before.\n> \n> Without changing updator-side, invalid restart_lsn AND valid\n> invalidated_at can be regarded as the lost state. With the following\n> change suggested by Fujii-san we can avoid the confusing status.\n> \n> With attached first patch on top of the slot-dirtify fix below, we get\n> \"lost\" for invalidated slots after restart.\n\nMakes sense. I pushed this change, thanks.\n\n\n> The confusing status can be avoided without fixing it, but I prefer to\n> fix it. As Fujii-san suggested upthread, couldn't we remember\n> lastRemovedSegNo in the contorl file? (Yeah, it cuases a bump of\n> PG_CONTROL_VERSION and CATALOG_VERSION_NO?).\n\nI think that's a pg14 change. Feel free to submit a patch to the\ncommitfest.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 26 Jun 2020 20:46:10 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Review for GetWALAvailability()"
}
] |
[
{
"msg_contents": "Hi everyone,\n\nWould some additional procedure language handler code examples in the\ndocumentation be good to add? I've put some together in the attached\npatch, and can log it to a future commitfest if people think it would\nbe a good addition.\n\nRegards,\nMark\n-- \nMark Wong\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/",
"msg_date": "Fri, 12 Jun 2020 10:26:48 -0700",
"msg_from": "Mark Wong <mark@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "doc examples for pghandler"
},
{
"msg_contents": "Mark Wong <mark@2ndquadrant.com> writes:\n> Would some additional procedure language handler code examples in the\n> documentation be good to add? I've put some together in the attached\n> patch, and can log it to a future commitfest if people think it would\n> be a good addition.\n\nHmm. The existing doc examples are really pretty laughable, because\nthere's such a large gap between the offered skeleton and a workable\nhandler. So I agree it'd be nice to do better, but I'm suspicious of\nhaving large chunks of sample code in the docs --- that's a maintenance\nproblem, if only because we likely won't notice when we break it.\nAlso, if somebody is hoping to copy-and-paste such code, it isn't\nthat nice to work from if it's embedded in SGML.\n\nI wonder if it'd be possible to adapt what you have here into some\ntiny contrib module that doesn't do very much useful, but can at\nleast be tested to see that it compiles; moreover it could be\ncopied verbatim to serve as a starting point for a new PL.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 12 Jun 2020 15:10:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: doc examples for pghandler"
},
{
"msg_contents": "On Fri, Jun 12, 2020 at 03:10:20PM -0400, Tom Lane wrote:\n> Mark Wong <mark@2ndquadrant.com> writes:\n> > Would some additional procedure language handler code examples in the\n> > documentation be good to add? I've put some together in the attached\n> > patch, and can log it to a future commitfest if people think it would\n> > be a good addition.\n> \n> Hmm. The existing doc examples are really pretty laughable, because\n> there's such a large gap between the offered skeleton and a workable\n> handler. So I agree it'd be nice to do better, but I'm suspicious of\n> having large chunks of sample code in the docs --- that's a maintenance\n> problem, if only because we likely won't notice when we break it.\n> Also, if somebody is hoping to copy-and-paste such code, it isn't\n> that nice to work from if it's embedded in SGML.\n> \n> I wonder if it'd be possible to adapt what you have here into some\n> tiny contrib module that doesn't do very much useful, but can at\n> least be tested to see that it compiles; moreover it could be\n> copied verbatim to serve as a starting point for a new PL.\n\nI do have the code examples in a repo. [1] The 0.4 directory consists\nof everything the examples show. \n\nIt would be easy enough to adapt that for contrib, and move some of the\ncontent from the doc patch into that. Then touch up the handler chapter\nto reference the contrib module.\n\nDoes that sound more useful?\n\n[1] https://gitlab.com/markwkm/yappl\n\n-- \nMark Wong\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n",
"msg_date": "Fri, 12 Jun 2020 15:35:25 -0700",
"msg_from": "Mark Wong <mark@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: doc examples for pghandler"
},
{
"msg_contents": "Mark Wong <mark@2ndquadrant.com> writes:\n> On Fri, Jun 12, 2020 at 03:10:20PM -0400, Tom Lane wrote:\n>> I wonder if it'd be possible to adapt what you have here into some\n>> tiny contrib module that doesn't do very much useful, but can at\n>> least be tested to see that it compiles; moreover it could be\n>> copied verbatim to serve as a starting point for a new PL.\n\n> I do have the code examples in a repo. [1] The 0.4 directory consists\n> of everything the examples show. \n\n> It would be easy enough to adapt that for contrib, and move some of the\n> content from the doc patch into that. Then touch up the handler chapter\n> to reference the contrib module.\n\nOn second thought, contrib/ is not quite the right place, because we\ntypically expect modules there to actually get installed, meaning they\nhave to have at least some end-user usefulness. The right place for\na toy PL handler is probably src/test/modules/; compare for example\nsrc/test/modules/test_parser/, which is serving quite the same sort\nof purpose as a skeleton text search parser.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 12 Jun 2020 22:13:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: doc examples for pghandler"
},
{
"msg_contents": "On Fri, Jun 12, 2020 at 10:13:41PM -0400, Tom Lane wrote:\n> On second thought, contrib/ is not quite the right place, because we\n> typically expect modules there to actually get installed, meaning they\n> have to have at least some end-user usefulness. The right place for\n> a toy PL handler is probably src/test/modules/; compare for example\n> src/test/modules/test_parser/, which is serving quite the same sort\n> of purpose as a skeleton text search parser.\n\n+1 for src/test/modules/, and if you can provide some low-level API\ncoverage through this module, that's even better.\n--\nMichael",
"msg_date": "Sat, 13 Jun 2020 13:19:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: doc examples for pghandler"
},
{
"msg_contents": "On Sat, Jun 13, 2020 at 01:19:17PM +0900, Michael Paquier wrote:\n> On Fri, Jun 12, 2020 at 10:13:41PM -0400, Tom Lane wrote:\n> > On second thought, contrib/ is not quite the right place, because we\n> > typically expect modules there to actually get installed, meaning they\n> > have to have at least some end-user usefulness. The right place for\n> > a toy PL handler is probably src/test/modules/; compare for example\n> > src/test/modules/test_parser/, which is serving quite the same sort\n> > of purpose as a skeleton text search parser.\n> \n> +1 for src/test/modules/, and if you can provide some low-level API\n> coverage through this module, that's even better.\n\nSounds good to me. Something more like the attached patch?\n\nRegards,\nMark\n-- \nMark Wong\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/",
"msg_date": "Sun, 14 Jun 2020 20:45:17 -0700",
"msg_from": "Mark Wong <mark@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: doc examples for pghandler"
},
{
"msg_contents": "On Sun, Jun 14, 2020 at 08:45:17PM -0700, Mark Wong wrote:\n> Sounds good to me. Something more like the attached patch?\n\nThat's the idea. I have not gone in details into what you have here,\nbut perhaps it would make sense to do a bit more and show how things\nare done in the context of a PL function called in a trigger? Your\npatch removes from the docs a code block that outlined that.\n--\nMichael",
"msg_date": "Mon, 15 Jun 2020 16:47:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: doc examples for pghandler"
},
{
"msg_contents": "On Mon, Jun 15, 2020 at 04:47:01PM +0900, Michael Paquier wrote:\n> On Sun, Jun 14, 2020 at 08:45:17PM -0700, Mark Wong wrote:\n> > Sounds good to me. Something more like the attached patch?\n> \n> That's the idea. I have not gone in details into what you have here,\n> but perhaps it would make sense to do a bit more and show how things\n> are done in the context of a PL function called in a trigger? Your\n> patch removes from the docs a code block that outlined that.\n\nAh, right. For the moment I've added some empty conditionals for\ntrigger and event trigger handling.\n\nI've created a new entry in the commitfest app. [1] I'll keep at it. :)\n\nRegards,\nMark\n\n[1] https://commitfest.postgresql.org/29/2678/\n\n-- \nMark Wong\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/",
"msg_date": "Tue, 11 Aug 2020 13:01:10 -0700",
"msg_from": "Mark Wong <mark@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: doc examples for pghandler"
},
{
"msg_contents": "On Tue, Aug 11, 2020 at 01:01:10PM -0700, Mark Wong wrote:\n> Ah, right. For the moment I've added some empty conditionals for\n> trigger and event trigger handling.\n> \n> I've created a new entry in the commitfest app. [1] I'll keep at it. :)\n\nThanks for the patch. I have reviewed and reworked it as the\nattached. Some comments below.\n\n+PGFILEDESC = \"PL/Sample - procedural language\"\n+\n+REGRESS = create_pl create_func select_func\n+\n+EXTENSION = plsample\n+EXTVERSION = 0.1\n\nThis makefile has a couple of mistakes, and can be simplified a lot:\n- make check does not work, as you forgot a PGXS part.\n- MODULES can just be used as there is only one file (forgot WIN32RES\nin OBJS for example)\n- DATA does not need the .control file.\n\n.gitignore was missing.\n\nWe could just use 1.0 instead of 0.1 for the version number. That's\nnot a big deal one way or another, but 1.0 is more consistent with the\nother modules.\n\nplsample--1.0.sql should complain if attempting to load the file from\npsql. Also I have cleaned up the README.\n\nNot sure that there is a point in having three different files for the\nregression tests. create_pl.sql is actually not necessary as you\ncan do the same with CREATE EXTENSION.\n\nThe header list of plsample.c was inconsistent with the style used\nnormally in modules, and I have extended a bit the handler function so\nas we return a result only if the return type of the procedure is text\nfor the source text of the function, tweaked the results a bit, etc.\nThere was a family of small issues, like using ALLOCSET_SMALL_SIZES\nfor the context creation. We could of course expand the sample\nhandler more in the future to check for pseudotype results, have a\nvalidator, but that could happen later, if necessary.\n--\nMichael",
"msg_date": "Fri, 14 Aug 2020 14:25:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: doc examples for pghandler"
},
{
"msg_contents": "On Fri, Aug 14, 2020 at 02:25:52PM +0900, Michael Paquier wrote:\n> On Tue, Aug 11, 2020 at 01:01:10PM -0700, Mark Wong wrote:\n> > Ah, right. For the moment I've added some empty conditionals for\n> > trigger and event trigger handling.\n> > \n> > I've created a new entry in the commitfest app. [1] I'll keep at it. :)\n> \n> Thanks for the patch. I have reviewed and reworked it as the\n> attached. Some comments below.\n> \n> +PGFILEDESC = \"PL/Sample - procedural language\"\n> +\n> +REGRESS = create_pl create_func select_func\n> +\n> +EXTENSION = plsample\n> +EXTVERSION = 0.1\n> \n> This makefile has a couple of mistakes, and can be simplified a lot:\n> - make check does not work, as you forgot a PGXS part.\n> - MODULES can just be used as there is only one file (forgot WIN32RES\n> in OBJS for example)\n> - DATA does not need the .control file.\n> \n> .gitignore was missing.\n> \n> We could just use 1.0 instead of 0.1 for the version number. That's\n> not a big deal one way or another, but 1.0 is more consistent with the\n> other modules.\n> \n> plsample--1.0.sql should complain if attempting to load the file from\n> psql. Also I have cleaned up the README.\n> \n> Not sure that there is a point in having three different files for the\n> regression tests. create_pl.sql is actually not necessary as you\n> can do the same with CREATE EXTENSION.\n> \n> The header list of plsample.c was inconsistent with the style used\n> normally in modules, and I have extended a bit the handler function so\n> as we return a result only if the return type of the procedure is text\n> for the source text of the function, tweaked the results a bit, etc.\n> There was a family of small issues, like using ALLOCSET_SMALL_SIZES\n> for the context creation. We could of course expand the sample\n> handler more in the future to check for pseudotype results, have a\n> validator, but that could happen later, if necessary.\n\nThanks for fixing all of that up for me. 
I did have a couple mental\nnotes for a couple of the last items. :)\n\nI've attached a small word diff to suggest a few different words to use\nin the README, if that sounds better?\n\nRegards,\nMark\n-- \nMark Wong\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/",
"msg_date": "Mon, 17 Aug 2020 16:30:07 -0700",
"msg_from": "Mark Wong <mark@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: doc examples for pghandler"
},
{
"msg_contents": "On Mon, Aug 17, 2020 at 04:30:07PM -0700, Mark Wong wrote:\n> I've attached a small word diff to suggest a few different words to use\n> in the README, if that sounds better?\n\nSounds good to me. So applied with those changes. It is really\ntempting to add an example of validator (one simple thing would be to\nreturn an error if trying to use TRIGGEROID or EVTTRIGGEROID), but\nthat may not be the best thing to do for a test module. And what we\nhave here is already much better than the original docs.\n--\nMichael",
"msg_date": "Tue, 18 Aug 2020 11:14:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: doc examples for pghandler"
}
] |
[
{
"msg_contents": "-Hackers,\n\nI came across this today [1], \"\n3 Results\n\nIn most respects, PostgreSQL behaved as expected: both read uncommitted and\nread committed prevent write skew and aborted reads. We observed no\ninternal consistency violations. However, we have two surprising results to\nreport. The first is that PostgreSQL’s “repeatable read” is weaker than\nrepeatable read, at least as defined by Berenson, Adya, Bailis, et al. This\nis not necessarily wrong: the ANSI SQL standard is ambiguous. The second\nresult, which is definitely wrong, is that PostgreSQL’s “serializable”\nisolation level isn’t serializable: it allows G2-item during normal\noperation. \"\n\nThanks!\n\nJD\n\n1. https://jepsen.io/analyses/postgresql-12.3\n\n-Hackers,I came across this today [1], \"3 ResultsIn most respects, PostgreSQL behaved as expected: both read uncommitted and read committed prevent write skew and aborted reads. We observed no internal consistency violations. However, we have two surprising results to report. The first is that PostgreSQL’s “repeatable read” is weaker than repeatable read, at least as defined by Berenson, Adya, Bailis, et al. This is not necessarily wrong: the ANSI SQL standard is ambiguous. The second result, which is definitely wrong, is that PostgreSQL’s “serializable” isolation level isn’t serializable: it allows G2-item during normal operation. \"Thanks!JD1. https://jepsen.io/analyses/postgresql-12.3",
"msg_date": "Fri, 12 Jun 2020 10:58:25 -0700",
"msg_from": "Joshua Drake <jd@commandprompt.com>",
"msg_from_op": true,
"msg_subject": "Serializable wrong?"
},
{
"msg_contents": "On Fri, Jun 12, 2020 at 6:58 PM Joshua Drake <jd@commandprompt.com> wrote:\n\n> -Hackers,\n>\n> I came across this today [1], \"\n> 3 Results\n>\n> In most respects, PostgreSQL behaved as expected: both read uncommitted\n> and read committed prevent write skew and aborted reads. We observed no\n> internal consistency violations. However, we have two surprising results to\n> report. The first is that PostgreSQL’s “repeatable read” is weaker than\n> repeatable read, at least as defined by Berenson, Adya, Bailis, et al. This\n> is not necessarily wrong: the ANSI SQL standard is ambiguous. The second\n> result, which is definitely wrong, is that PostgreSQL’s “serializable”\n> isolation level isn’t serializable: it allows G2-item during normal\n> operation. \"\n>\n> Thanks!\n>\n> JD\n>\n> 1. https://jepsen.io/analyses/postgresql-12.3\n>\n\nYes, this has been reported and is under discussion in pgsql-bugs list:\n\nhttps://www.postgresql.org/message-id/db7b729d-0226-d162-a126-8a8ab2dc4443%40jepsen.io\n\nOn Fri, Jun 12, 2020 at 6:58 PM Joshua Drake <jd@commandprompt.com> wrote:-Hackers,I came across this today [1], \"3 ResultsIn most respects, PostgreSQL behaved as expected: both read uncommitted and read committed prevent write skew and aborted reads. We observed no internal consistency violations. However, we have two surprising results to report. The first is that PostgreSQL’s “repeatable read” is weaker than repeatable read, at least as defined by Berenson, Adya, Bailis, et al. This is not necessarily wrong: the ANSI SQL standard is ambiguous. The second result, which is definitely wrong, is that PostgreSQL’s “serializable” isolation level isn’t serializable: it allows G2-item during normal operation. \"Thanks!JD1. https://jepsen.io/analyses/postgresql-12.3Yes, this has been reported and is under discussion in pgsql-bugs list:https://www.postgresql.org/message-id/db7b729d-0226-d162-a126-8a8ab2dc4443%40jepsen.io",
"msg_date": "Fri, 12 Jun 2020 19:19:03 +0100",
"msg_from": "Pantelis Theodosiou <ypercube@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Serializable wrong?"
}
] |
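For readers unfamiliar with the anomaly vocabulary in the report above, write skew (which plain snapshot isolation permits and a serializable level is supposed to rule out) is easy to reproduce in a toy model. The sketch below is purely illustrative and not real PostgreSQL behavior: plain Python dictionaries stand in for per-transaction snapshots, and all names are invented for this example.

```python
# Toy model of write skew, the anomaly snapshot isolation permits and a
# serializable level must prevent. Two transactions each read both rows
# from their own snapshot, conclude the invariant x + y >= 1 would
# survive their write, and then zero *different* rows. The write sets
# are disjoint, so no write-write conflict is detected, yet the combined
# outcome matches no serial execution order.
db = {"x": 1, "y": 1}      # application invariant: x + y >= 1

snap1 = dict(db)           # T1 takes its snapshot
snap2 = dict(db)           # T2 takes its snapshot (before either write)

if snap1["x"] + snap1["y"] >= 2:   # T1: "zeroing y still leaves >= 1"
    db["y"] = 0                    # T1 commits its write to y
if snap2["x"] + snap2["y"] >= 2:   # T2: "zeroing x still leaves >= 1"
    db["x"] = 0                    # T2 commits its write to x

print(db)                  # {'x': 0, 'y': 0} -- invariant broken
```

Either serial order (T1 then T2, or T2 then T1) would leave one row at 1; a serializable implementation detects the rw-antidependency cycle between the two transactions and aborts one of them.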
[
{
"msg_contents": "Posgres13_beta1, is consistently writing to the logs, \"could not rename\ntemporary statistics file\".\nWhen analyzing the source that writes the log, I simplified the part that\nwrites the logs a little.\n\n1. I changed from if else if to if\n2. For the user, better to have more errors recorded, which can help in\ndiscovering the problem\n3. Errors are independent of each other\n4. If I can't release tmpfile, there's no way to delete it (unlink)\n5. If I can rename, there is no need to delete it (unlink) tmpfile.\n\nAttached is the patch that proposes these changes.\n\nNow, the problem has not been solved.\n1. statfile, is it really closed or does it not exist in the directory?\n There is no way to rename a file, which is open and in use.\n2. Why delete (pgstat_stat_filename), if permanent is true:\nconst char * statfile = permanent? PGSTAT_STAT_PERMANENT_FILENAME:\npgstat_stat_filename;\nstatfile is PGSTAT_STAT_PERMANENT_FILENAME and not pgstat_stat_filename\n\nregards,\nRanier Vilela",
"msg_date": "Fri, 12 Jun 2020 15:15:52 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Postgresql13_beta1 (could not rename temporary statistics file)\n Windows 64bits"
},
{
"msg_contents": "Em sex., 12 de jun. de 2020 às 15:15, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> Posgres13_beta1, is consistently writing to the logs, \"could not rename\n> temporary statistics file\".\n> When analyzing the source that writes the log, I simplified the part that\n> writes the logs a little.\n>\n> 1. I changed from if else if to if\n> 2. For the user, better to have more errors recorded, which can help in\n> discovering the problem\n> 3. Errors are independent of each other\n> 4. If I can't release tmpfile, there's no way to delete it (unlink)\n> 5. If I can rename, there is no need to delete it (unlink) tmpfile.\n>\nFix for case 5.",
"msg_date": "Fri, 12 Jun 2020 19:03:30 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql13_beta1 (could not rename temporary statistics file)\n Windows 64bits"
},
{
"msg_contents": "posix rename, \"renames a file, moving it between directories if required\".\n\npgrename, win32 port uses MoveFileEx, to support rename files at Windows\nside,\nbut, actually don't allow \"renames a file, moving it between directories if\nrequired\".\n\nTo match the same characteristics as posix rename, we need to add a flag to\nMoveFileEx (MOVEFILE_COPY_ALLOWED)\nWhich allows, if necessary, to move between volumes, drives and directories.\n\nSuch a resource seems to decrease the chances of occurring, permission\ndenied, when renaming the temporary statistics file.",
"msg_date": "Sun, 14 Jun 2020 13:41:50 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql13_beta1 (could not rename temporary statistics file)\n Windows 64bits"
},
{
"msg_contents": "On Fri, Jun 12, 2020 at 03:15:52PM -0300, Ranier Vilela wrote:\n> Posgres13_beta1, is consistently writing to the logs, \"could not rename\n> temporary statistics file\".\n> When analyzing the source that writes the log, I simplified the part that\n> writes the logs a little.\n\nFWIW, I have been running a server on Windows for some time with\npgbench running in the background, combined with some starts and stops\nbut I cannot see that. This log entry uses LOG, which is the level we\nuse for the TAP tests and please note that there are four buildfarm\nanimals for Windows able to run the TAP tests and they don't seem to\nshow that problem either: drongo, fairywen, jacana and bowerbird. I\nmay be missing something of course.\n--\nMichael",
"msg_date": "Mon, 15 Jun 2020 11:07:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql13_beta1 (could not rename temporary statistics file)\n Windows 64bits"
},
{
"msg_contents": "On Fri, Jun 12, 2020 at 03:15:52PM -0300, Ranier Vilela wrote:\n> Posgres13_beta1, is consistently writing to the logs, \"could not rename\n> temporary statistics file\".\n> When analyzing the source that writes the log, I simplified the part that\n> writes the logs a little.\n\nWhat windows version and compiler ?\n\nPlease show the full CSV log for this event, and not an excerpt.\nPreferably with several lines of \"context\" for the stats process PID, with\nlog_min_messages=debug or debug2 and log_error_verbosity=verbose, so that you\nget the file location where it's erroring, if you don't already know that.\n\nhttps://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n\n> 1. I changed from if else if to if\n> 2. For the user, better to have more errors recorded, which can help in\n> discovering the problem\n> 3. Errors are independent of each other\n> 4. If I can't release tmpfile, there's no way to delete it (unlink)\n> 5. If I can rename, there is no need to delete it (unlink) tmpfile.\n> \n> Attached is the patch that proposes these changes.\n> Now, the problem has not been solved.\n\nIt sounds like you haven't yet found the problem, right ? These are all\nunrelated changes which are confusing the problem report and discussion.\nAnd introducing behavior regressions, like renaming files with write errors on\ntop of known good files.\n\nI think you'll want to 1) identify where the problem is occuring, and attach a\ndebugger there.\n\n2) figure out when the problem was introduced. If this problem doesn't happen\nunder v12:\n\ngit log --cherry-pick -p origin/REL_12_STABLE...origin/REL_13_STABLE -- src/backend/postmaster/pgstat.c\nor just:\ngit log -p origin/REL_12_STABLE.. 
src/backend/postmaster/pgstat.c\n\nYou could try git-bisecting between v12..v13, but there's only 30 commits which\ntouched pgstat.c (assuming that's where the ERROR is being thrown).\n\nDo you have a special value of stats_temp_directory?\nOr a symlink or junction at pg_stat_tmp ?\n\n> 1. statfile, is it really closed or does it not exist in the directory?\n> There is no way to rename a file, which is open and in use.\n> 2. Why delete (pgstat_stat_filename), if permanent is true:\n> const char * statfile = permanent? PGSTAT_STAT_PERMANENT_FILENAME:\n> pgstat_stat_filename;\n> statfile is PGSTAT_STAT_PERMANENT_FILENAME and not pgstat_stat_filename\n\nYou can find answers to a lot of questions in the git history. In this case,\n70d756970 and 187492b6c.\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 14 Jun 2020 21:53:28 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql13_beta1 (could not rename temporary statistics file)\n Windows 64bits"
},
{
"msg_contents": "Em dom., 14 de jun. de 2020 às 23:08, Michael Paquier <michael@paquier.xyz>\nescreveu:\n\n> On Fri, Jun 12, 2020 at 03:15:52PM -0300, Ranier Vilela wrote:\n> > Posgres13_beta1, is consistently writing to the logs, \"could not rename\n> > temporary statistics file\".\n> > When analyzing the source that writes the log, I simplified the part that\n> > writes the logs a little.\n>\n> FWIW, I have been running a server on Windows for some time with\n> pgbench running in the background, combined with some starts and stops\n> but I cannot see that. This log entry uses LOG, which is the level we\n> use for the TAP tests and please note that there are four buildfarm\n> animals for Windows able to run the TAP tests and they don't seem to\n> show that problem either: drongo, fairywen, jacana and bowerbird. I\n> may be missing something of course\n\nHi Michael, thsnks for answer.\nYeah, something is wrong with Postgres and Windows 10 (not server) with\nmsvc 2019 (64 bits)\nII already reported on another thread, that vcregress is failing with\n(float8 and partitionprune) and now these messages are showing up.\nNone buildfarm animal, have that combination, but as Postgres officially\nsupports it ..\n\nregards,\nRanier Vilela\n\n<https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail>\nLivre\nde vírus. www.avast.com\n<https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail>.\n<#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2>\n\nEm dom., 14 de jun. 
de 2020 às 23:08, Michael Paquier <michael@paquier.xyz> escreveu:On Fri, Jun 12, 2020 at 03:15:52PM -0300, Ranier Vilela wrote:\n> Posgres13_beta1, is consistently writing to the logs, \"could not rename\n> temporary statistics file\".\n> When analyzing the source that writes the log, I simplified the part that\n> writes the logs a little.\n\nFWIW, I have been running a server on Windows for some time with\npgbench running in the background, combined with some starts and stops\nbut I cannot see that. This log entry uses LOG, which is the level we\nuse for the TAP tests and please note that there are four buildfarm\nanimals for Windows able to run the TAP tests and they don't seem to\nshow that problem either: drongo, fairywen, jacana and bowerbird. I\nmay be missing something of courseHi Michael, thsnks for answer.Yeah, something is wrong with Postgres and Windows 10 (not server) with msvc 2019 (64 bits)II already reported on another thread, that vcregress is failing with (float8 and partitionprune) and now these messages are showing up.None buildfarm animal, have that combination, but as Postgres officially supports it ..regards,Ranier Vilela \n\n\nLivre de vírus. www.avast.com.",
"msg_date": "Mon, 15 Jun 2020 09:49:31 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql13_beta1 (could not rename temporary statistics file)\n Windows 64bits"
},
{
"msg_contents": "Em dom., 14 de jun. de 2020 às 23:53, Justin Pryzby <pryzby@telsasoft.com>\nescreveu:\n\n> On Fri, Jun 12, 2020 at 03:15:52PM -0300, Ranier Vilela wrote:\n> > Posgres13_beta1, is consistently writing to the logs, \"could not rename\n> > temporary statistics file\".\n> > When analyzing the source that writes the log, I simplified the part that\n> > writes the logs a little.\n>\n> What windows version and compiler ?\n>\nWindows 10 (2004, msvc 2019 (64 bits)\nNone configuration, only git clone and build.bat\n\n\n>\n> Please show the full CSV log for this event, and not an excerpt.\n> Preferably with several lines of \"context\" for the stats process PID, with\n> log_min_messages=debug or debug2 and log_error_verbosity=verbose, so that\n> you\n> get the file location where it's erroring, if you don't already know that.\n>\n> https://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n>\n> Ok, I will provide.\n\n\n\n> > 1. I changed from if else if to if\n> > 2. For the user, better to have more errors recorded, which can help in\n> > discovering the problem\n> > 3. Errors are independent of each other\n> > 4. If I can't release tmpfile, there's no way to delete it (unlink)\n> > 5. If I can rename, there is no need to delete it (unlink) tmpfile.\n> >\n> > Attached is the patch that proposes these changes.\n> > Now, the problem has not been solved.\n>\n> It sounds like you haven't yet found the problem, right ? 
These are all\n> unrelated changes which are confusing the problem report and discussion.\n> And introducing behavior regressions, like renaming files with write\n> errors on\n> top of known good files.\n>\nIt is certainly on pgstat.c\nYes, I have not yet discovered the real cause.\nBut, while checking the code, I thought I could improve error checking, and\nto avoid creating a new thread about it, I took advantage of that thread.\nIt would certainly be better to separate, but this list is busy, I tried\nnot to create any more confusion.\nIf I can't record, I can't close and I can't rename, seeing this in the\nlogs, certainly, helps to solve the problem, not confuse.\nIn addition, the flow was cleaner, with fewer instructions and in this\nspecific case, it continues with the same expected behavior.\nOnly the rename message is showing, so the expected behavior is the same.\n\n\n> I think you'll want to 1) identify where the problem is occuring, and\n> attach a\n> debugger there.\n>\nI will try.\n\n\n>\n> 2) figure out when the problem was introduced. If this problem doesn't\n> happen\n> under v12:\n>\nI don't think so, but v12 I'm using the official distributed version.\n\n\n>\n> git log --cherry-pick -p origin/REL_12_STABLE...origin/REL_13_STABLE --\n> src/backend/postmaster/pgstat.c\n> or just:\n> git log -p origin/REL_12_STABLE.. src/backend/postmaster/pgstat.c\n>\n> You could try git-bisecting between v12..v13, but there's only 30 commits\n> which\n> touched pgstat.c (assuming that's where the ERROR is being thrown)\n\nThanks for the hit.\n\n\n> .\n>\n> Do you have a special value of stats_temp_directory?\n>\nNot.\n\n\n> Or a symlink or junction at pg_stat_tmp ?\n>\nNot. Only C driver (one volume)\n\nYou saw the patch regarding the flag, suggested.\n\n\n> > 1. statfile, is it really closed or does it not exist in the directory?\n> > There is no way to rename a file, which is open and in use.\n> > 2. 
Why delete (pgstat_stat_filename), if permanent is true:\n> > const char * statfile = permanent? PGSTAT_STAT_PERMANENT_FILENAME:\n> > pgstat_stat_filename;\n> > statfile is PGSTAT_STAT_PERMANENT_FILENAME and not pgstat_stat_filename\n>\n> You can find answers to a lot of questions in the git history. In this\n> case,\n> 70d756970 and 187492b6c.\n>\nOk,.\n\nThanks for answer Justin.\n\nregards,\nRanier Vilela\n\n<https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail>\nLivre\nde vírus. www.avast.com\n<https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail>.\n<#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2>\n\nEm dom., 14 de jun. de 2020 às 23:53, Justin Pryzby <pryzby@telsasoft.com> escreveu:On Fri, Jun 12, 2020 at 03:15:52PM -0300, Ranier Vilela wrote:\n> Posgres13_beta1, is consistently writing to the logs, \"could not rename\n> temporary statistics file\".\n> When analyzing the source that writes the log, I simplified the part that\n> writes the logs a little.\n\nWhat windows version and compiler ?Windows 10 (2004, msvc 2019 (64 bits)None configuration, only git clone and build.bat \n\nPlease show the full CSV log for this event, and not an excerpt.\nPreferably with several lines of \"context\" for the stats process PID, with\nlog_min_messages=debug or debug2 and log_error_verbosity=verbose, so that you\nget the file location where it's erroring, if you don't already know that.\n\nhttps://wiki.postgresql.org/wiki/Guide_to_reporting_problems\nOk, I will provide. \n> 1. I changed from if else if to if\n> 2. For the user, better to have more errors recorded, which can help in\n> discovering the problem\n> 3. Errors are independent of each other\n> 4. If I can't release tmpfile, there's no way to delete it (unlink)\n> 5. 
If I can rename, there is no need to delete it (unlink) tmpfile.\n> \n> Attached is the patch that proposes these changes.\n> Now, the problem has not been solved.\n\nIt sounds like you haven't yet found the problem, right ? These are all\nunrelated changes which are confusing the problem report and discussion.\nAnd introducing behavior regressions, like renaming files with write errors on\ntop of known good files.It is certainly on pgstat.cYes, I have not yet discovered the real cause.But, while checking the code, I thought I could improve error checking, and to avoid creating a new thread about it, I took advantage of that thread. It would certainly be better to separate, but this list is busy, I tried not to create any more confusion.If I can't record, I can't close and I can't rename, seeing this in the logs, certainly, helps to solve the problem, not confuse.In addition, the flow was cleaner, with fewer instructions and in this specific case, it continues with the same expected behavior.Only the rename message is showing, so the expected behavior is the same. \n\nI think you'll want to 1) identify where the problem is occuring, and attach a\ndebugger there.I will try. \n\n2) figure out when the problem was introduced. If this problem doesn't happen\nunder v12:I don't think so, but v12 I'm using the official distributed version. \n\ngit log --cherry-pick -p origin/REL_12_STABLE...origin/REL_13_STABLE -- src/backend/postmaster/pgstat.c\nor just:\ngit log -p origin/REL_12_STABLE.. src/backend/postmaster/pgstat.c\n\nYou could try git-bisecting between v12..v13, but there's only 30 commits which\ntouched pgstat.c (assuming that's where the ERROR is being thrown)Thanks for the hit. .\n\nDo you have a special value of stats_temp_directory?Not. \nOr a symlink or junction at pg_stat_tmp ?Not. Only C driver (one volume) You saw the patch regarding the flag, suggested.\n\n> 1. 
statfile, is it really closed or does it not exist in the directory?\n> There is no way to rename a file, which is open and in use.\n> 2. Why delete (pgstat_stat_filename), if permanent is true:\n> const char * statfile = permanent? PGSTAT_STAT_PERMANENT_FILENAME:\n> pgstat_stat_filename;\n> statfile is PGSTAT_STAT_PERMANENT_FILENAME and not pgstat_stat_filename\n\nYou can find answers to a lot of questions in the git history. In this case,\n70d756970 and 187492b6c.Ok,.Thanks for answer Justin.regards,Ranier Vilela \n\n\nLivre de vírus. www.avast.com.",
"msg_date": "Mon, 15 Jun 2020 10:03:56 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql13_beta1 (could not rename temporary statistics file)\n Windows 64bits"
},
{
"msg_contents": "Attached a log.\n\nI hacked dirmod.c (pgrename), to print GetLastError();\n\nMoveFIleEx from: pg_stat_tmp/global.tmp\nMoveFIleEx to: pg_stat_tmp/global.stat\nMoveFIleEx win32 error code 5\n\nregards,\nRanier Vilela",
"msg_date": "Mon, 15 Jun 2020 13:26:15 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql13_beta1 (could not rename temporary statistics file)\n Windows 64bits"
},
{
"msg_contents": "I can confirm that the problem is in pgrename (dirmod.c),\nsomething is not OK, with MoveFileEx, even with the\n(MOVEFILE_REPLACE_EXISTING) flag.\n\nReplacing MoveFileEx, with\nunlink (to);\nrename (from, to);\n\n#if defined (WIN32) &&! defined (__ CYGWIN__)\nunlink(to);\nwhile (rename (from, to)! = 0)\n#else\nwhile (rename (from, to) <0)\n#endif\n\nThe log messages have disappeared.\n\nI suspect that if the target (to) file exists, MoveFileEx, it is failing to\nrename, even with the flag enabled.\n\nWindows have the native rename function (\nhttps://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/rename-wrename?view=vs-2019\n)\nHowever, it fails if the target (to) file exists.\n\nQuestion, is it acceptable delete the target file, if it exists, to\nguarantee the rename?\n\nregards,\nRanier Vilela",
"msg_date": "Mon, 15 Jun 2020 23:49:33 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql13_beta1 (could not rename temporary statistics file)\n Windows 64bits"
},
{
"msg_contents": "On Mon, Jun 15, 2020 at 09:49:31AM -0300, Ranier Vilela wrote:\n> II already reported on another thread, that vcregress is failing with\n> (float8 and partitionprune) and now these messages are showing up.\n> None buildfarm animal, have that combination, but as Postgres officially\n> supports it ..\n\nWell, it happens that I do have a VM with MSVC 2019 64-bit and Windows\n10 (17763.rs5_release.180914-1434 coming directly from Microsoft's\nwebsite). Testing with it, I am not seeing any issues related to\ntemporary file renames, though getting a disk full I could find some\nerrors in writing and/or closing temporary statistics file because of\nENOSPC but that's not a surprise in this situation, and regression\ntests just work fine. Honestly, I see nothing to act on based on the\ninformation provided in this thread.\n--\nMichael",
"msg_date": "Tue, 16 Jun 2020 12:52:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql13_beta1 (could not rename temporary statistics file)\n Windows 64bits"
},
{
"msg_contents": "On Mon, Jun 15, 2020 at 11:49:33PM -0300, Ranier Vilela wrote:\n> I can confirm that the problem is in pgrename (dirmod.c),\n> something is not OK, with MoveFileEx, even with the\n> (MOVEFILE_REPLACE_EXISTING) flag.\n> \n> Replacing MoveFileEx, with\n> unlink (to);\n> rename (from, to);\n> \n> #if defined (WIN32) &&! defined (__ CYGWIN__)\n> unlink(to);\n> while (rename (from, to)! = 0)\n> #else\n> while (rename (from, to) <0)\n> #endif\n> \n> The log messages have disappeared.\n> \n> I suspect that if the target (to) file exists, MoveFileEx, it is failing to\n> rename, even with the flag enabled.\n> \n> Windows have the native rename function (\n> https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/rename-wrename?view=vs-2019\n> )\n> However, it fails if the target (to) file exists.\n> \n> Question, is it acceptable delete the target file, if it exists, to\n> guarantee the rename?\n\nI don't think so - the idea behind writing $f.tmp and renaming it to $f is that\nit's *atomic*.\n\nI found an earlier thread addressing the same problem - we probably both\nshould've found this earlier.\nhttps://commitfest.postgresql.org/27/2230/\nhttps://www.postgresql.org/message-id/flat/CAPpHfds9trA6ipezK3BsuuOSQwEmESiqj8pkOxACFJpoLpcoNw%40mail.gmail.com#9b04576b717175e9dbf03cc991977d3f\n\nThat thread goes back to 2017, so I don't think this is a new problem in v13.\nI'm not sure why you wouldn't also see the same behavior in v12.\n\nThere's a couple patches in that thread, and the latest patch was rejected.\n\nMaybe you'd want to test them out and provide feedback.\n\nBTW, the first patch does:\n\n! if (filePresent)\n! {\n! if (ReplaceFile(to, from, NULL, REPLACEFILE_IGNORE_MERGE_ERRORS, 0, 0))\n! break;\n! }\n! else\n! {\n! if (MoveFileEx(from, to, MOVEFILE_REPLACE_EXISTING))\n! break;\n! 
}\n\nSince it's racy to first check if the file exists, I would've thought we should\ninstead do:\n\n\tret = ReplaceFile()\n\tif ret == OK:\n\t\tbreak\n\telse if ret == FileDoesntExist:\n\t\tif MoveFileEx():\n\t\t\tbreak\n\nOr, should we just try to create a file first, to allow ReplaceFile() to always\nwork ?\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 15 Jun 2020 23:10:18 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql13_beta1 (could not rename temporary statistics file)\n Windows 64bits"
},
{
"msg_contents": "Em ter., 16 de jun. de 2020 às 01:10, Justin Pryzby <pryzby@telsasoft.com>\nescreveu:\n\n> On Mon, Jun 15, 2020 at 11:49:33PM -0300, Ranier Vilela wrote:\n> > I can confirm that the problem is in pgrename (dirmod.c),\n> > something is not OK, with MoveFileEx, even with the\n> > (MOVEFILE_REPLACE_EXISTING) flag.\n> >\n> > Replacing MoveFileEx, with\n> > unlink (to);\n> > rename (from, to);\n> >\n> > #if defined (WIN32) &&! defined (__ CYGWIN__)\n> > unlink(to);\n> > while (rename (from, to)! = 0)\n> > #else\n> > while (rename (from, to) <0)\n> > #endif\n> >\n> > The log messages have disappeared.\n> >\n> > I suspect that if the target (to) file exists, MoveFileEx, it is failing\n> to\n> > rename, even with the flag enabled.\n> >\n> > Windows have the native rename function (\n> >\n> https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/rename-wrename?view=vs-2019\n> > )\n> > However, it fails if the target (to) file exists.\n> >\n> > Question, is it acceptable delete the target file, if it exists, to\n> > guarantee the rename?\n>\n> I don't think so - the idea behind writing $f.tmp and renaming it to $f is\n> that\n> it's *atomic*.\n>\n\"atomic\" here means, rename operation only, see.\n1. create $f.tmp\n2. write\n3. close\n4. 
"rename to $f\n\n\n>\n> I found an earlier thread addressing the same problem - we probably both\n> should've found this earlier.\n> https://commitfest.postgresql.org/27/2230/\n>\n> https://www.postgresql.org/message-id/flat/CAPpHfds9trA6ipezK3BsuuOSQwEmESiqj8pkOxACFJpoLpcoNw%40mail.gmail.com#9b04576b717175e9dbf03cc991977d3f\n\nVery interesting.\nThat link shed some light:\nhttps://www.virtualbox.org/ticket/2350\nUsers report this same situation in production environments with a high\nload.\nHere I have only one notebook, with 8 GB RAM, Windows 10 64 bits (2004),\nmsvc 2019 (64 bits).\nWithout any load, just starting and stopping the server, the messages\nappear in the log.\n\n\n>\n>\n> That thread goes back to 2017, so I don't think this is a new problem in\n> v13.\n> I'm not sure why you wouldn't also see the same behavior in v12.\n>\nWindows Server 2003 (32 bits) with Postgres 9.6 (32 bits): no such log\nentries (could rename).\nWindows Server 2016 (64 bits) with Postgres 12 (64 bits): has log entries\n(could not rename).\n\nYes, I can confirm it for v12 too.\n\nFile postgresql-Mon.log:\n2019-10-14 11:56:26 GMT [4844]: [1-1] user=,db=,app=,client= LOG: could\nnot rename temporary statistics file \"pg_stat_tmp/global.tmp\" to\n\"pg_stat_tmp/global.stat\": Permission denied\nFile postgresql-Sat.log:\n2019-09-28 10:15:52 GMT [4804]: [2-1] user=,db=,app=,client= LOG: could\nnot rename temporary statistics file \"pg_stat_tmp/global.tmp\" to\n\"pg_stat_tmp/global.stat\": Permission denied\n2019-10-05 12:01:23 GMT [4792]: [1-1] user=,db=,app=,client= LOG: could\nnot rename temporary statistics file \"pg_stat_tmp/global.tmp\" to\n\"pg_stat_tmp/global.stat\": Permission denied\n2019-10-05 23:55:31 GMT [4792]: [2-1] user=,db=,app=,client= LOG: could\nnot rename temporary statistics file \"pg_stat_tmp/global.tmp\" to\n\"pg_stat_tmp/global.stat\": Permission denied\nFile postgresql-Sun.log:\n2019-10-06 07:43:36 GMT [4792]: [3-1] user=,db=,app=,client= LOG: could\nnot rename temporary statistics file \"pg_stat_tmp/global.tmp\" to\n\"pg_stat_tmp/global.stat\": Permission denied\nFile postgresql-Tue.log:\n2019-10-01 07:31:46 GMT [4804]: [3-1] user=,db=,app=,client= LOG: could\nnot rename temporary statistics file \"pg_stat_tmp/global.tmp\" to\n\"pg_stat_tmp/global.stat\": Permission denied\n2019-10-22 14:44:25 GMT [3868]: [1-1] user=,db=,app=,client= LOG: could\nnot rename temporary statistics file \"pg_stat_tmp/global.tmp\" to\n\"pg_stat_tmp/global.stat\": Permission denied\nFile postgresql-Wed.log:\n2019-10-02 22:20:52 GMT [4212]: [1-1] user=,db=,app=,client= LOG: could\nnot rename temporary statistics file \"pg_stat_tmp/global.tmp\" to\n\"pg_stat_tmp/global.stat\": Permission denied\n2019-10-09 11:05:02 GMT [3712]: [1-1] user=,db=,app=,client= LOG: could\nnot rename temporary statistics file \"pg_stat_tmp/global.tmp\" to\n\"pg_stat_tmp/global.stat\": Permission denied\n\n\n>\n> There's a couple patches in that thread, and the latest patch was rejected.\n>\n> Maybe you'd want to test them out and provide feedback.\n>\nI will try.\n\n\n> BTW, the first patch does:\n>\n> ! if (filePresent)\n> ! {\n> ! if (ReplaceFile(to, from, NULL,\n> REPLACEFILE_IGNORE_MERGE_ERRORS, 0, 0))\n> ! break;\n> ! }\n> ! else\n> ! {\n> ! if (MoveFileEx(from, to,\n> MOVEFILE_REPLACE_EXISTING))\n> ! break;\n> ! }\n>\n> Since it's racy to first check if the file exists, I would've thought we\n> should\n> instead do:\n>\n> ret = ReplaceFile()\n> if ret == OK:\n> break\n> else if ret == FileDoesntExist:\n> if MoveFileEx():\n> break\n>\n> Or, should we just try to create a file first, to allow ReplaceFile() to\n> always\n> work ?\n>\nThis, seemingly, works too:\n+ while (rename(from, to) != 0 && !MoveFileEx(from, to,\nMOVEFILE_REPLACE_EXISTING | MOVEFILE_COPY_ALLOWED))\n\nAttached a patch.\n\nregards,\nRanier Vilela",
"msg_date": "Tue, 16 Jun 2020 10:14:42 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql13_beta1 (could not rename temporary statistics file)\n Windows 64bits"
}
]
[
{
"msg_contents": "Hi,\n\nCurrently using EXPLAIN (ANALYZE) without TIMING OFF regularly changes\nthe resulting timing enough that the times aren't meaningful. E.g.\n\nCREATE TABLE lotsarows(key int not null);\nINSERT INTO lotsarows SELECT generate_series(1, 50000000);\nVACUUM FREEZE lotsarows;\n\n\n-- best of three:\nSELECT count(*) FROM lotsarows;\nTime: 1923.394 ms (00:01.923)\n\n-- best of three:\nEXPLAIN (ANALYZE, TIMING OFF) SELECT count(*) FROM lotsarows;\nTime: 2319.830 ms (00:02.320)\n\n-- best of three:\nEXPLAIN (ANALYZE, TIMING ON) SELECT count(*) FROM lotsarows;\nTime: 4202.649 ms (00:04.203)\n\nThat's nearly *double* the execution time of running without TIMING.\n\n\nLooking at a profile of this shows that we spend a good bit of cycles\n\"normalizing\" timestamps etc. That seems pretty unnecessary, just forced\non us due to struct timespec. So the first attached patch just turns\ninstr_time into a 64bit integer, counting nanoseconds.\n\nThat helps, a tiny bit:\nEXPLAIN (ANALYZE, TIMING ON) SELECT count(*) FROM lotsarows;\nTime: 4179.302 ms (00:04.179)\n\nbut obviously doesn't move the needle.\n\n\nLooking at a profile it's easy to confirm that we spend a lot of time\nacquiring time:\n- 95.49% 0.00% postgres postgres [.] agg_retrieve_direct (inlined)\n - agg_retrieve_direct (inlined)\n - 79.27% fetch_input_tuple\n - ExecProcNode (inlined)\n - 75.72% ExecProcNodeInstr\n + 25.22% SeqNext\n - 21.74% InstrStopNode\n + 17.80% __GI___clock_gettime (inlined)\n - 21.44% InstrStartNode\n + 19.23% __GI___clock_gettime (inlined)\n + 4.06% ExecScan\n + 13.09% advance_aggregates (inlined)\n 1.06% MemoryContextReset\n\nAnd that's even though linux avoids a syscall (in most cases) etc to\nacquire the time. Unless the kernel detects there's a reason not to do\nso, linux does this by using 'rdtscp' and multiplying it by kernel\nprovided factors to turn the cycles into time.\n\nSome of the time is spent doing function calls, dividing into struct\ntimespec, etc. 
But most of it just the rdtscp instruction:\n 65.30 │1 63: rdtscp\n\n\nThe reason for that is largely that rdtscp waits until all prior\ninstructions have finished (but it allows later instructions to already\nstart). Multiple times for each tuple.\n\n\nIn the second attached prototype patch I've change instr_time to count\nin cpu cycles instead of nanoseconds. And then just turned the cycles\ninto seconds in INSTR_TIME_GET_DOUBLE() (more about that part later).\n\nWhen using rdtsc that results in *vastly* lower overhead:\n┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\n│ QUERY PLAN │\n├───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤\n│ Aggregate (cost=846239.20..846239.21 rows=1 width=8) (actual time=2610.235..2610.235 rows=1 loops=1) │\n│ -> Seq Scan on lotsarows (cost=0.00..721239.16 rows=50000016 width=0) (actual time=0.006..1512.886 rows=50000000 loops=1) │\n│ Planning Time: 0.028 ms │\n│ Execution Time: 2610.256 ms │\n└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\n(4 rows)\n\nTime: 2610.589 ms (00:02.611)\n\nAnd there's still some smaller improvements that could be made ontop of\nthat.\n\nAs a comparison, here's the time when using rdtscp directly in\ninstr_time, instead of going through clock_gettime:\nTime: 3481.162 ms (00:03.481)\n\nThat shows pretty well how big the cost of the added pipeline stalls\nare, and how important out-of-order execution is for decent\nperformance...\n\n\nIn my opinion, for the use in InstrStartNode(), InstrStopNode() etc, we\ndo *not* want to wait for prior instructions to finish, since that\nactually leads to the timing being less accurate, rather than\nmore. There are other cases where that'd be different, e.g. 
measuring\nhow long an entire query takes or such (but there it's probably\nirrelevant which to use).\n\n\nI've above skipped a bit over the details of how to turn the cycles\nreturned by rdtsc into time:\n\nOn x86 CPUs of the last ~12 years rdtsc doesn't return the cycles that\nhave actually been run, but instead returns the number of 'reference\ncycles'. That's important because otherwise things like turbo mode and\nlower power modes would lead to completely bogus times.\n\nThus, knowing the \"base frequency\" of the CPU allows us to turn the\ndifference between two rdtsc return values into seconds.\n\nIn the attached prototype I just determined the number of cycles using\ncpuid(0x16). That's only available since Skylake (I think). On older\nCPUs we'd have to look at /proc/cpuinfo or\n/sys/devices/system/cpu/cpu0/cpufreq/base_frequency.\n\n\nThere are also other issues with using rdtsc directly: On older CPUs, in\nparticular older multi-socket systems, the tsc will not be synchronized\nin detail across cores. There are bits that'd let us check whether tsc is\nsuitable or not. The more current issue of that is that things like\nvirtual machines being migrated can lead to rdtsc suddenly returning a\ndifferent value / the frequency differing. But that is supposed to be\nsolved these days, by having virtualization technologies set frequency\nmultipliers and offsets which then cause rdtsc[p] to return something\nmeaningful, even after migration.\n\n\nThe attached patches are really just a prototype. I'm also not really\nplanning to work on getting this into a \"production ready\" patchset\nanytime soon. 
Even if requires a\nbunch of low-level code.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Fri, 12 Jun 2020 16:28:10 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "so 13. 6. 2020 v 1:28 odesílatel Andres Freund <andres@anarazel.de> napsal:\n\n> Hi,\n>\n> Currently using EXPLAIN (ANALYZE) without TIMING OFF regularly changes\n> the resulting timing enough that the times aren't meaningful. E.g.\n>\n> CREATE TABLE lotsarows(key int not null);\n> INSERT INTO lotsarows SELECT generate_series(1, 50000000);\n> VACUUM FREEZE lotsarows;\n>\n>\n> -- best of three:\n> SELECT count(*) FROM lotsarows;\n> Time: 1923.394 ms (00:01.923)\n>\n> -- best of three:\n> EXPLAIN (ANALYZE, TIMING OFF) SELECT count(*) FROM lotsarows;\n> Time: 2319.830 ms (00:02.320)\n>\n> -- best of three:\n> EXPLAIN (ANALYZE, TIMING ON) SELECT count(*) FROM lotsarows;\n> Time: 4202.649 ms (00:04.203)\n>\n> That nearly *double* the execution time without TIMING.\n>\n>\n> Looking at a profile of this shows that we spend a good bit of cycles\n> \"normalizing\" timstamps etc. That seems pretty unnecessary, just forced\n> on us due to struct timespec. So the first attached patch just turns\n> instr_time to be a 64bit integer, counting nanoseconds.\n>\n> That helps, a tiny bit:\n> EXPLAIN (ANALYZE, TIMING ON) SELECT count(*) FROM lotsarows;\n> Time: 4179.302 ms (00:04.179)\n>\n> but obviously doesn't move the needle.\n>\n>\n> Looking at a profile it's easy to confirm that we spend a lot of time\n> acquiring time:\n> - 95.49% 0.00% postgres postgres [.]\n> agg_retrieve_direct (inlined)\n> - agg_retrieve_direct (inlined)\n> - 79.27% fetch_input_tuple\n> - ExecProcNode (inlined)\n> - 75.72% ExecProcNodeInstr\n> + 25.22% SeqNext\n> - 21.74% InstrStopNode\n> + 17.80% __GI___clock_gettime (inlined)\n> - 21.44% InstrStartNode\n> + 19.23% __GI___clock_gettime (inlined)\n> + 4.06% ExecScan\n> + 13.09% advance_aggregates (inlined)\n> 1.06% MemoryContextReset\n>\n> And that's even though linux avoids a syscall (in most cases) etc to\n> acquire the time. 
Unless the kernel detects there's a reason not to do\n> so, linux does this by using 'rdtscp' and multiplying it by kernel\n> provided factors to turn the cycles into time.\n>\n> Some of the time is spent doing function calls, dividing into struct\n> timespec, etc. But most of it just the rdtscp instruction:\n> 65.30 │1 63: rdtscp\n>\n>\n> The reason for that is largely that rdtscp waits until all prior\n> instructions have finished (but it allows later instructions to already\n> start). Multiple times for each tuple.\n>\n>\n> In the second attached prototype patch I've change instr_time to count\n> in cpu cycles instead of nanoseconds. And then just turned the cycles\n> into seconds in INSTR_TIME_GET_DOUBLE() (more about that part later).\n>\n> When using rdtsc that results in *vastly* lower overhead:\n>\n> ┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\n> │ QUERY PLAN\n> │\n>\n> ├───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤\n> │ Aggregate (cost=846239.20..846239.21 rows=1 width=8) (actual\n> time=2610.235..2610.235 rows=1 loops=1) │\n> │ -> Seq Scan on lotsarows (cost=0.00..721239.16 rows=50000016\n> width=0) (actual time=0.006..1512.886 rows=50000000 loops=1) │\n> │ Planning Time: 0.028 ms\n> │\n> │ Execution Time: 2610.256 ms\n> │\n>\n> └───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\n> (4 rows)\n>\n> Time: 2610.589 ms (00:02.611)\n>\n> And there's still some smaller improvements that could be made ontop of\n> that.\n>\n> As a comparison, here's the time when using rdtscp directly in\n> instr_time, instead of going through clock_gettime:\n> Time: 3481.162 ms (00:03.481)\n>\n> That shows pretty well how big the cost of the added pipeline stalls\n> are, and how important out-of-order execution is for decent\n> 
performance...\n>\n>\n> In my opinion, for the use in InstrStartNode(), InstrStopNode() etc, we\n> do *not* want to wait for prior instructions to finish, since that\n> actually leads to the timing being less accurate, rather than\n> more. There are other cases where that'd be different, e.g. measuring\n> how long an entire query takes or such (but there it's probably\n> irrelevant which to use).\n>\n>\n> I've above skipped a bit over the details of how to turn the cycles\n> returned by rdtsc into time:\n>\n> On x86 CPUs of the last ~12 years rdtsc doesn't return the cycles that\n> have actually been run, but instead returns the number of 'reference\n> cycles'. That's important because otherwise things like turbo mode and\n> lower power modes would lead to completely bogus times.\n>\n> Thus, knowing the \"base frequency\" of the CPU allows to turn the\n> difference between two rdtsc return values into seconds.\n>\n> In the attached prototype I just determined the number of cycles using\n> cpuid(0x16). That's only available since Skylake (I think). On older\n> CPUs we'd have to look at /proc/cpuinfo or\n> /sys/devices/system/cpu/cpu0/cpufreq/base_frequency.\n>\n>\n> There's also other issues with using rdtsc directly: On older CPUs, in\n> particular older multi-socket systems, the tsc will not be synchronized\n> in detail across cores. There's bits that'd let us check whether tsc is\n> suitable or not. The more current issue of that is that things like\n> virtual machines being migrated can lead to rdtsc suddenly returning a\n> different value / the frequency differening. But that is supposed to be\n> solved these days, by having virtualization technologies set frequency\n> multipliers and offsets which then cause rdtsc[p] to return something\n> meaningful, even after migration.\n>\n>\n> The attached patches are really just a prototype. I'm also not really\n> planning to work on getting this into a \"production ready\" patchset\n> anytime soon. 
I developed it primarily because I found it the overhead\n> made it too hard to nail down in which part of a query tree performance\n> changed. If somebody else wants to continue from here...\n>\n> I do think it's be a pretty significant improvement if we could reduce\n> the timing overhead of EXPLAIN ANALYZE by this much. Even if requires a\n> bunch of low-level code.\n>\n\n+1\n\nPavel\n\n\n> Greetings,\n>\n> Andres Freund\n>",
"msg_date": "Sat, 13 Jun 2020 05:53:01 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "On Sat, Jun 13, 2020 at 11:28 AM Andres Freund <andres@anarazel.de> wrote:\n> [PATCH v1 1/2] WIP: Change instr_time to just store nanoseconds, that's cheaper.\n\nMakes a lot of sense. If we do this, I'll need to update pgbench,\nwhich just did something similar locally. If I'd been paying\nattention to this thread I might not have committed that piece of the\nrecent pgbench changes, but it's trivial stuff and I'll be happy to\ntidy that up when the time comes.\n\n> [PATCH v1 2/2] WIP: Use cpu reference cycles, via rdtsc, to measure time for instrumentation.\n\n> Some of the time is spent doing function calls, dividing into struct\n> timespec, etc. But most of it just the rdtscp instruction:\n> 65.30 │1 63: rdtscp\n\n> The reason for that is largely that rdtscp waits until all prior\n> instructions have finished (but it allows later instructions to already\n> start). Multiple times for each tuple.\n\nYeah, after reading a bit about this, I agree that there is no reason\nto think that the stalling version makes the answer better in any way.\nIt might make sense if you use it once at the beginning of a large\ncomputation, but it makes no sense if you sprinkle it around inside\nblocks that will run multiple times. It destroys your\ninstructions-per-cycle while, turning your fancy super scalar Pentium\ninto a 486. It does raise some interesting questions about what\nexactly you're measuring, though: I don't know enough to have a good\ngrip on how far out of order the TSC could be read!\n\n> There's also other issues with using rdtsc directly: On older CPUs, in\n> particular older multi-socket systems, the tsc will not be synchronized\n> in detail across cores. There's bits that'd let us check whether tsc is\n> suitable or not. The more current issue of that is that things like\n> virtual machines being migrated can lead to rdtsc suddenly returning a\n> different value / the frequency differening. 
But that is supposed to be\n> solved these days, by having virtualization technologies set frequency\n> multipliers and offsets which then cause rdtsc[p] to return something\n> meaningful, even after migration.\n\nGoogling tells me that Nehalem (2008) introduced \"invariant TSC\"\n(clock rate independent) and also socket synchronisation at the same\ntime, so systems without it are already pretty long in the tooth.\n\nA quick peek at an AMD manual[1] tells me that a similar change\nhappened in 15H/Bulldozer/Piledriver/Steamroller/Excavator (2011),\nidentified with the same CPUID test.\n\nMy first reaction is that it seems like TSC would be the least of your\nworries if you're measuring a VM that's currently migrating between\nhosts, but maybe the idea is just that you have to make sure you don't\nassume it can't ever go backwards or something like that...\n\nGoogle Benchmark has some clues about how to spell this on MSVC, what\nsome instructions might be to research on ARM, etc.\n\n[1] https://www.amd.com/system/files/TechDocs/47414_15h_sw_opt_guide.pdf\n(page 373)\n[2] https://github.com/google/benchmark/blob/master/src/cycleclock.h\n\n\n",
"msg_date": "Sat, 13 Mar 2021 13:34:16 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "On Fri, Jun 12, 2020 at 4:28 PM Andres Freund <andres@anarazel.de> wrote:\n\n> The attached patches are really just a prototype. I'm also not really\n> planning to work on getting this into a \"production ready\" patchset\n> anytime soon. I developed it primarily because I found it the overhead\n> made it too hard to nail down in which part of a query tree performance\n> changed. If somebody else wants to continue from here...\n>\n> I do think it's be a pretty significant improvement if we could reduce\n> the timing overhead of EXPLAIN ANALYZE by this much. Even if requires a\n> bunch of low-level code.\n>\n\nBased on an off-list conversation with Andres, I decided to dust off this\nold\npatch for using rdtsc directly. The significant EXPLAIN ANALYZE performance\nimprovements (especially when using rdtsc instead of rdtsc*p*) seem to\nwarrant\ngiving this a more thorough look.\n\nSee attached an updated patch (adding it to the July commitfest), with a few\nchanges:\n\n- Keep using clock_gettime() as a fallback if we decide to not use rdtsc\n- Fallback to /proc/cpuinfo for clock frequency, if cpuid(0x16) doesn't work\n- The decision to use rdtsc (or not) is made at runtime, in the new\n INSTR_TIME_INITIALIZE() -- we can't make this decision at compile time\n because this is dependent on the specific CPU in use, amongst other things\n- In an abundance of caution, for now I've decided to only enable this if we\n are on Linux/x86, and the current kernel clocksource is TSC (the kernel\nhas\n quite sophisticated logic around making this decision, see [1])\n\nNote that if we implemented the decision logic ourselves (instead of relying\non the Linux kernel), I'd be most worried about older virtualization\ntechnology. 
In my understanding getting this right is notably more\ncomplicated\nthan just checking cpuid, see [2].\n\nKnown WIP problems with this patch version:\n\n* There appears to be a timing discrepancy I haven't yet worked out, where\n the \\timing data reported by psql doesn't match what EXPLAIN ANALYZE is\n reporting. With Andres' earlier test case, I'm seeing a consistent ~700ms\n higher for \\timing than for the EXPLAIN ANALYZE time reported on the\nserver\n side, only when rdtsc measurement is used -- its likely there is a problem\n somewhere with how we perform the cycles to time conversion\n* Possibly related, the floating point handling for the cycles_to_sec\nvariable\n is problematic in terms of precision (see FIXME, taken over from Andres'\nPOC)\n\nOpen questions from me:\n\n1) Do we need to account for different TSC offsets on different CPUs in SMP\n systems? (the Linux kernel certainly has logic to that extent, but [3]\n suggests this is no longer a problem on Nehalem and newer chips, i.e.\nthose\n having an invariant TSC)\n\n2) Should we have a setting \"--with-tsc\" for configure? (instead of always\n enabling it when on Linux/x86 with a TSC clocksource)\n\n3) Are there cases where we actually want to use rdtsc*p*? (i.e. wait for\n current instructions to finish -- the prior discussion seemed to suggest\n we don't want it for node instruction measurements, but possibly we do\nwant\n this in other cases?)\n\n4) Should we support using the \"mrs\" instruction on ARM? (which is similar\nto\n rdtsc, see [4])\n\nThanks,\nLukas\n\n[1] https://github.com/torvalds/linux/blob/master/arch/x86/kernel/tsc.c\n[2] http://oliveryang.net/2015/09/pitfalls-of-TSC-usage/\n[3] https://stackoverflow.com/a/11060619/1652607\n[4] https://cpufun.substack.com/p/fun-with-timers-and-cpuid\n\n-- \nLukas Fittl",
"msg_date": "Fri, 1 Jul 2022 01:23:01 -0700",
"msg_from": "Lukas Fittl <lukas@fittl.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-01 01:23:01 -0700, Lukas Fittl wrote:\n> On Fri, Jun 12, 2020 at 4:28 PM Andres Freund <andres@anarazel.de> wrote:\n> \n> > The attached patches are really just a prototype. I'm also not really\n> > planning to work on getting this into a \"production ready\" patchset\n> > anytime soon. I developed it primarily because I found it the overhead\n> > made it too hard to nail down in which part of a query tree performance\n> > changed. If somebody else wants to continue from here...\n> >\n> > I do think it's be a pretty significant improvement if we could reduce\n> > the timing overhead of EXPLAIN ANALYZE by this much. Even if requires a\n> > bunch of low-level code.\n> >\n> \n> Based on an off-list conversation with Andres, I decided to dust off this\n> old\n> patch for using rdtsc directly. The significant EXPLAIN ANALYZE performance\n> improvements (especially when using rdtsc instead of rdtsc*p*) seem to\n> warrant\n> giving this a more thorough look.\n> \n> See attached an updated patch (adding it to the July commitfest), with a few\n> changes:\n> \n> - Keep using clock_gettime() as a fallback if we decide to not use rdtsc\n\nYep.\n\n\n> - Fallback to /proc/cpuinfo for clock frequency, if cpuid(0x16) doesn't work\n\nI suspect that this might not be needed anymore. Seems like it'd be ok to just\nfall back to clock_gettime() in that case.\n\n\n> - In an abundance of caution, for now I've decided to only enable this if we\n> are on Linux/x86, and the current kernel clocksource is TSC (the kernel\n> has\n> quite sophisticated logic around making this decision, see [1])\n\nI think our requirements are a bit lower than the kernel's - we're not\ntracking wall clock over an extended period...\n\n\n> Note that if we implemented the decision logic ourselves (instead of relying\n> on the Linux kernel), I'd be most worried about older virtualization\n> technology. 
In my understanding getting this right is notably more\n> complicated\n> than just checking cpuid, see [2].\n\n\n> Known WIP problems with this patch version:\n> \n> * There appears to be a timing discrepancy I haven't yet worked out, where\n> the \\timing data reported by psql doesn't match what EXPLAIN ANALYZE is\n> reporting. With Andres' earlier test case, I'm seeing a consistent ~700ms\n> higher for \\timing than for the EXPLAIN ANALYZE time reported on the\n> server\n> side, only when rdtsc measurement is used -- its likely there is a problem\n> somewhere with how we perform the cycles to time conversion\n\nCould you explain a bit more what you're seeing? I just tested your patches\nand didn't see that here.\n\n\n> * Possibly related, the floating point handling for the cycles_to_sec\n> variable\n> is problematic in terms of precision (see FIXME, taken over from Andres'\n> POC)\n\nAnd probably also performance...\n\n\n> Open questions from me:\n> \n> 1) Do we need to account for different TSC offsets on different CPUs in SMP\n> systems? (the Linux kernel certainly has logic to that extent, but [3]\n> suggests this is no longer a problem on Nehalem and newer chips, i.e.\n> those\n> having an invariant TSC)\n\nI don't think we should cater to systems where we need that.\n\n\n> 2) Should we have a setting \"--with-tsc\" for configure? (instead of always\n> enabling it when on Linux/x86 with a TSC clocksource)\n\nProbably not worth it.\n\n\n> 3) Are there cases where we actually want to use rdtsc*p*? (i.e. wait for\n> current instructions to finish -- the prior discussion seemed to suggest\n> we don't want it for node instruction measurements, but possibly we do\n> want\n> this in other cases?)\n\nI was wondering about that too... Perhaps we should add a\nINSTR_TIME_SET_CURRENT_BARRIER() or such?\n\n\n> 4) Should we support using the \"mrs\" instruction on ARM? 
(which is similar\n> to\n> rdtsc, see [4])\n\nI'd leave that for later personally.\n\n\n\n> #define NS_PER_S INT64CONST(1000000000)\n> #define US_PER_S INT64CONST(1000000)\n> #define MS_PER_S INT64CONST(1000)\n> @@ -95,17 +104,37 @@ typedef int64 instr_time;\n> \n> #define INSTR_TIME_SET_ZERO(t)\t((t) = 0)\n> \n> -static inline instr_time pg_clock_gettime_ns(void)\n> +extern double cycles_to_sec;\n> +\n> +bool use_rdtsc;\n\nThis should be extern and inside the ifdef below.\n\n\n> +#if defined(__x86_64__) && defined(__linux__)\n> +extern void pg_clock_gettime_initialize_rdtsc(void);\n> +#endif\n> +\n> +static inline instr_time pg_clock_gettime_ref_cycles(void)\n> {\n> \tstruct timespec tmp;\n> \n> +#if defined(__x86_64__) && defined(__linux__)\n> +\tif (use_rdtsc)\n> +\t\treturn __rdtsc();\n> +#endif\n> +\n> \tclock_gettime(PG_INSTR_CLOCK, &tmp);\n> \n> \treturn tmp.tv_sec * NS_PER_S + tmp.tv_nsec;\n> }\n> \n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 1 Jul 2022 10:26:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "I ran that original test case with and without the patch. Here are the\nnumbers I'm seeing:\n\nmaster (best of three):\n\npostgres=# SELECT count(*) FROM lotsarows;\nTime: 582.423 ms\n\npostgres=# EXPLAIN (ANALYZE, TIMING OFF) SELECT count(*) FROM lotsarows;\nTime: 616.102 ms\n\npostgres=# EXPLAIN (ANALYZE, TIMING ON) SELECT count(*) FROM lotsarows;\nTime: 1068.700 ms (00:01.069)\n\npatched (best of three):\n\npostgres=# SELECT count(*) FROM lotsarows;\nTime: 550.822 ms\n\npostgres=# EXPLAIN (ANALYZE, TIMING OFF) SELECT count(*) FROM lotsarows;\nTime: 612.572 ms\n\npostgres=# EXPLAIN (ANALYZE, TIMING ON) SELECT count(*) FROM lotsarows;\nTime: 690.875 ms\n\nOn Fri, Jul 1, 2022 at 10:26 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-07-01 01:23:01 -0700, Lukas Fittl wrote:\n>...\n> > Known WIP problems with this patch version:\n> >\n> > * There appears to be a timing discrepancy I haven't yet worked out, where\n> > the \\timing data reported by psql doesn't match what EXPLAIN ANALYZE is\n> > reporting. With Andres' earlier test case, I'm seeing a consistent ~700ms\n> > higher for \\timing than for the EXPLAIN ANALYZE time reported on the\n> > server\n> > side, only when rdtsc measurement is used -- its likely there is a problem\n> > somewhere with how we perform the cycles to time conversion\n>\n> Could you explain a bit more what you're seeing? I just tested your patches\n> and didn't see that here.\n\nI did not see this either, but I did see that the execution time\nreported by \\timing is (for this test case) consistently 0.5-1ms\n*lower* than the Execution Time reported by EXPLAIN. I did not see\nthat on master. Is that expected?\n\nThanks,\nMaciek\n\n\n",
"msg_date": "Fri, 15 Jul 2022 11:21:51 -0700",
"msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "On Fri, Jul 15, 2022 at 11:22 PM Maciek Sakrejda <m.sakrejda@gmail.com>\nwrote:\n\n> I ran that original test case with and without the patch. Here are the\n> numbers I'm seeing:\n>\n> master (best of three):\n>\n> postgres=# SELECT count(*) FROM lotsarows;\n> Time: 582.423 ms\n>\n> postgres=# EXPLAIN (ANALYZE, TIMING OFF) SELECT count(*) FROM lotsarows;\n> Time: 616.102 ms\n>\n> postgres=# EXPLAIN (ANALYZE, TIMING ON) SELECT count(*) FROM lotsarows;\n> Time: 1068.700 ms (00:01.069)\n>\n> patched (best of three):\n>\n> postgres=# SELECT count(*) FROM lotsarows;\n> Time: 550.822 ms\n>\n> postgres=# EXPLAIN (ANALYZE, TIMING OFF) SELECT count(*) FROM lotsarows;\n> Time: 612.572 ms\n>\n> postgres=# EXPLAIN (ANALYZE, TIMING ON) SELECT count(*) FROM lotsarows;\n> Time: 690.875 ms\n>\n> On Fri, Jul 1, 2022 at 10:26 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-07-01 01:23:01 -0700, Lukas Fittl wrote:\n> >...\n> > > Known WIP problems with this patch version:\n> > >\n> > > * There appears to be a timing discrepancy I haven't yet worked out,\n> where\n> > > the \\timing data reported by psql doesn't match what EXPLAIN ANALYZE\n> is\n> > > reporting. With Andres' earlier test case, I'm seeing a consistent\n> ~700ms\n> > > higher for \\timing than for the EXPLAIN ANALYZE time reported on the\n> > > server\n> > > side, only when rdtsc measurement is used -- its likely there is a\n> problem\n> > > somewhere with how we perform the cycles to time conversion\n> >\n> > Could you explain a bit more what you're seeing? I just tested your\n> patches\n> > and didn't see that here.\n>\n> I did not see this either, but I did see that the execution time\n> reported by \\timing is (for this test case) consistently 0.5-1ms\n> *lower* than the Execution Time reported by EXPLAIN. I did not see\n> that on master. 
Is that expected?\n>\n> Thanks,\n> Maciek\n>\n>\nThe patch requires a rebase; please rebase the patch with the latest code.\n\nHunk #5 succeeded at 147 with fuzz 2 (offset -3 lines).\nHunk #6 FAILED at 170.\nHunk #7 succeeded at 165 (offset -69 lines).\n2 out of 7 hunks FAILED -- saving rejects to file\nsrc/include/portability/instr_time.h.rej\npatching file src/tools/msvc/Mkvcbuild.pm\n\n\n\n-- \nIbrar Ahmed",
"msg_date": "Tue, 6 Sep 2022 11:32:18 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "On Tue, Sep 06, 2022 at 11:32:18AM +0500, Ibrar Ahmed wrote:\n> Hunk #5 succeeded at 147 with fuzz 2 (offset -3 lines).\n> Hunk #6 FAILED at 170.\n> Hunk #7 succeeded at 165 (offset -69 lines).\n> 2 out of 7 hunks FAILED -- saving rejects to file\n> src/include/portability/instr_time.h.rej\n> patching file src/tools/msvc/Mkvcbuild.pm\n\nNo rebased version has been sent since this update, so this patch has\nbeen marked as RwF.\n--\nMichael",
"msg_date": "Wed, 12 Oct 2022 17:33:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "I think it would be great to get this patch committed. Beyond the \nreasons already mentioned, the significant overhead also tends to skew \nthe reported runtimes in ways that makes it difficult to compare them. \nFor example, if two nodes are executed equally often but one needs twice \nthe time to process the rows: in such a case EXPLAIN ANALYZE should \nreport timings that are 2x apart. However, currently, the high overhead \nof clock_gettime() tends to skew the relative runtimes.\n\nOn 10/12/22 10:33, Michael Paquier wrote:\n> No rebased version has been sent since this update, so this patch has\n> been marked as RwF.\n\nI've rebased the patch set on latest master and fixed a few compiler \nwarnings. Beyond that some findings and thoughts:\n\nYou're only using RDTSC if the clock source is 'tsc'. Great idea to not \nbother caring about a lot of hairy TSC details. Looking at the kernel \ncode this seems to imply that the TSC is frequency invariant. I don't \nthink though that this implies that Linux is not running under a \nhypervisor; which is good because I assume PostgreSQL is used a lot in \nVMs. However, when running under a hypervisor (at least with VMWare) \nCPUID leaf 0x16 is not available. In my tests __get_cpuid() indicated \nsuccess but the returned values were garbage. Instead of using leaf \n0x16, we should then use the hypervisor interface to obtain the TSC \nfrequency. Checking if a hypervisor is active can be done via:\n\nbool IsHypervisorActive()\n{\n uint32 cpuinfo[4] = {0};\n int res = __get_cpuid(0x1, &cpuinfo[0], &cpuinfo[1], &cpuinfo[2], \n&cpuinfo[3]);\n return res > 0 && (cpuinfo[2] & (1 << 30));\n}\n\nObtaining the TSC frequency via the hypervisor interface can be done \nwith the following code. 
See https://lwn.net/Articles/301888/ for more \ndetails.\n\n// Under hypervisors (tested with VMWare) leaf 0x16 is not available,\n// even though __get_cpuid() succeeds.\n// Hence, if running under a hypervisor, use the hypervisor interface to\n// obtain TSC frequency.\nuint32 cpuinfo[4] = {0};\nif (IsHypervisorActive() && __get_cpuid(0x40000001, &cpuinfo[0], \n&cpuinfo[1], &cpuinfo[2], &cpuinfo[3]) > 0)\n cycles_to_sec = 1.0 / ((double)cpuinfo[0] * 1000 * 1000);\n\nGiven that we anyways switch between RDTSC and clock_gettime() with a \nglobal variable, what about exposing the clock source as GUC? That way \nthe user can switch back to a working clock source in case we miss a \ndetail around activating or reading the TSC.\n\nI'm happy to update the patches accordingly.\n\n--\nDavid Geier\n(ServiceNow)\n\n\n\n",
"msg_date": "Sat, 19 Nov 2022 21:05:28 +0100",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "I missed attaching the patches.\n\n-- \nDavid Geier\n(ServiceNow)",
"msg_date": "Sat, 19 Nov 2022 21:06:23 +0100",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi,\n\nI re-based again on master and applied the following changes:\n\nI removed the fallback for obtaining the TSC frequency from /proc/cpu as \nsuggested by Andres. Worst-case we fall back to clock_gettime().\n\nI added code to obtain the TSC frequency via CPUID when under a \nhypervisor. I had to use __cpuid() directly instead of __get_cpuid(), \nbecause __get_cpuid() returns an error if the leaf is > 0x80000000 \n(probably the implementation pre-dates the hypervisor timing leafs). \nUnfortunately, while testing my implementation under VMWare, I found \nthat RDTSC runs awfully slow there (like 30x slower). [1] indicates that \nwe cannot generally rely on RDTSC being actually fast on VMs. However, \nthe same applies to clock_gettime(). It runs as slow as RDTSC on my \nVMWare setup. Hence, using RDTSC is not at disadvantage. I'm not \nentirely sure if there aren't cases where e.g. clock_gettime() is \nactually faster than RDTSC and it would be advantageous to use \nclock_gettime(). We could add a GUC so that the user can decide which \nclock source to use. Any thoughts?\n\nI also somewhat improved the accuracy of the cycles to milli- and \nmicroseconds conversion functions by having two more multipliers with \nhigher precision. For microseconds we could also keep the computation \ninteger-only. I'm wondering what to best do for seconds and \nmilliseconds. I'm currently leaning towards just keeping it as is, \nbecause the durations measured and converted are usually long enough \nthat precision shouldn't be a problem.\n\nIn vacuum_lazy.c we do if ((INSTR_TIME_GET_MICROSEC(elapsed) / 1000). I \nchanged that to use INSTR_TIME_GET_MILLISECS() instead. Additionally, I \ninitialized a few variables of type instr_time which otherwise resulted \nin warnings due to use of potentially uninitialized variables.\n\nI also couldn't reproduce the reported timing discrepancy. 
For me the \nruntime reported by \\timing is just slightly higher than the time \nreported by EXPLAIN ANALYZE, which is expected.\n\nBeyond that:\n\nWhat about renaming INSTR_TIME_GET_DOUBLE() to INSTR_TIME_GET_SECS() so \nthat it's consistent with the _MILLISEC() and _MICROSEC() variants?\n\nThe INSTR_TIME_GET_MICROSEC() returns a uint64 while the other variants \nreturn double. This seems error prone. What about renaming the function \nor also have the function return a double and cast where necessary at \nthe call site?\n\nIf no one objects I would also re-register this patch in the commit fest.\n\n[1] \nhttps://vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/Timekeeping-In-VirtualMachines.pdf \n(page 11 \"Virtual TSC\")\n\n-- \nDavid Geier\n(ServiceNow)",
"msg_date": "Mon, 2 Jan 2023 14:28:20 +0100",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi David,\n\nThanks for continuing to work on this patch, and my apologies for silence\non the patch.\n\nIts been hard to make time, and especially so because I typically develop\non an ARM-based macOS system where I can't test this directly - hence my\ntests with virtualized EC2 instances, where I ran into the timing oddities.\n\nOn Mon, Jan 2, 2023 at 5:28 AM David Geier <geidav.pg@gmail.com> wrote:\n\n> The INSTR_TIME_GET_MICROSEC() returns a uint64 while the other variants\n> return double. This seems error prone. What about renaming the function\n> or also have the function return a double and cast where necessary at\n> the call site?\n>\n\nMinor note, but in my understanding using a uint64 (where we can) is faster\nfor any simple arithmetic we do with the values.\n\n\n> If no one objects I would also re-register this patch in the commit fest.\n>\n\n+1, and feel free to carry this patch forward - I'll try to make an effort\nto review my earlier testing issues again, as well as your later\nimprovements to the patch.\n\nAlso, FYI, I just posted an alternate idea for speeding up EXPLAIN ANALYZE\nwith timing over in [0], using a sampling-based approach to reduce the\ntiming overhead.\n\n[0]:\nhttps://www.postgresql.org/message-id/CAP53PkxXMk0j-%2B0%3DYwRti2pFR5UB2Gu4v2Oyk8hhZS0DRART6g%40mail.gmail.com\n\nThanks,\nLukas\n\n-- \nLukas Fittl",
"msg_date": "Mon, 2 Jan 2023 11:50:04 -0800",
"msg_from": "Lukas Fittl <lukas@fittl.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "On Fri, Jul 15, 2022 at 11:21 AM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:\n> On Fri, Jul 1, 2022 at 10:26 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-07-01 01:23:01 -0700, Lukas Fittl wrote:\n> >...\n> > > Known WIP problems with this patch version:\n> > >\n> > > * There appears to be a timing discrepancy I haven't yet worked out, where\n> > > the \\timing data reported by psql doesn't match what EXPLAIN ANALYZE is\n> > > reporting. With Andres' earlier test case, I'm seeing a consistent ~700ms\n> > > higher for \\timing than for the EXPLAIN ANALYZE time reported on the\n> > > server\n> > > side, only when rdtsc measurement is used -- its likely there is a problem\n> > > somewhere with how we perform the cycles to time conversion\n> >\n> > Could you explain a bit more what you're seeing? I just tested your patches\n> > and didn't see that here.\n>\n> I did not see this either, but I did see that the execution time\n> reported by \\timing is (for this test case) consistently 0.5-1ms\n> *lower* than the Execution Time reported by EXPLAIN. I did not see\n> that on master. Is that expected?\n\nFor what it's worth, I can no longer reproduce this. In fact, I went\nback to master-as-of-around-then and applied Lukas' v2 patches again,\nand I still can't reproduce that. I do remember it happening\nconsistently across several executions, but now \\timing consistently\nshows 0.5-1ms slower, as expected. This does not explain the different\ntiming issue Lukas was seeing in his tests, but I think we can assume\nwhat I reported originally here is not an issue.\n\n\n",
"msg_date": "Mon, 2 Jan 2023 12:44:42 -0800",
"msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi Lukas,\n\nOn 1/2/23 20:50, Lukas Fittl wrote:\n> Thanks for continuing to work on this patch, and my apologies for \n> silence on the patch.\n\nIt would be great if you could review it.\nPlease also share your thoughts around exposing the used clock source as \nGUC and renaming INSTR_TIME_GET_DOUBLE() to _SECS().\n\nI rebased again on master because of [1]. Patches attached.\n\n>\n> Its been hard to make time, and especially so because I typically \n> develop on an ARM-based macOS system where I can't test this directly \n> - hence my tests with virtualized EC2 instances, where I ran into the \n> timing oddities.\nThat's good and bad. Bad to do the development and good to test the \nimplementation on more virtualized setups; given that I also encountered \n\"interesting\" behavior on VMWare (see my previous mails).\n>\n> On Mon, Jan 2, 2023 at 5:28 AM David Geier <geidav.pg@gmail.com> wrote:\n>\n> The INSTR_TIME_GET_MICROSEC() returns a uint64 while the other\n> variants\n> return double. This seems error prone. What about renaming the\n> function\n> or also have the function return a double and cast where necessary at\n> the call site?\n>\n>\n> Minor note, but in my understanding using a uint64 (where we can) is \n> faster for any simple arithmetic we do with the values.\n\nThat's true. So the argument could be that for seconds and milliseconds \nwe want the extra precision while microseconds are precise enough. \nStill, we could also make the seconds and milliseconds conversion code \ninteger only and e.g. return two integers with the value before and \nafter the comma. FWICS, the functions are nowhere used in performance \ncritical code, so it doesn't really make a difference performance-wise.\n\n>\n> +1, and feel free to carry this patch forward - I'll try to make an \n> effort to review my earlier testing issues again, as well as your \n> later improvements to the patch.\nMoved to the current commit fest. 
Will you become reviewer?\n>\n> Also, FYI, I just posted an alternate idea for speeding up EXPLAIN \n> ANALYZE with timing over in [0], using a sampling-based approach to \n> reduce the timing overhead.\n\nInteresting idea. I'll reply with some thoughts on the corresponding thread.\n\n[1] \nhttps://www.postgresql.org/message-id/flat/CALDaNm3kRBGPhndujr9JcjjbDCG3anhj0vW8b9YtbXrBDMSvvw%40mail.gmail.com\n\n-- \nDavid Geier\n(ServiceNow)",
"msg_date": "Tue, 3 Jan 2023 09:38:20 +0100",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "On Tue, 3 Jan 2023 at 14:08, David Geier <geidav.pg@gmail.com> wrote:\n>\n> Hi Lukas,\n>\n> On 1/2/23 20:50, Lukas Fittl wrote:\n> > Thanks for continuing to work on this patch, and my apologies for\n> > silence on the patch.\n>\n> It would be great if you could review it.\n> Please also share your thoughts around exposing the used clock source as\n> GUC and renaming INSTR_TIME_GET_DOUBLE() to _SECS().\n>\n> I rebased again on master because of [1]. Patches attached.\n>\n> >\n> > Its been hard to make time, and especially so because I typically\n> > develop on an ARM-based macOS system where I can't test this directly\n> > - hence my tests with virtualized EC2 instances, where I ran into the\n> > timing oddities.\n> That's good and bad. Bad to do the development and good to test the\n> implementation on more virtualized setups; given that I also encountered\n> \"interesting\" behavior on VMWare (see my previous mails).\n> >\n> > On Mon, Jan 2, 2023 at 5:28 AM David Geier <geidav.pg@gmail.com> wrote:\n> >\n> > The INSTR_TIME_GET_MICROSEC() returns a uint64 while the other\n> > variants\n> > return double. This seems error prone. What about renaming the\n> > function\n> > or also have the function return a double and cast where necessary at\n> > the call site?\n> >\n> >\n> > Minor note, but in my understanding using a uint64 (where we can) is\n> > faster for any simple arithmetic we do with the values.\n>\n> That's true. So the argument could be that for seconds and milliseconds\n> we want the extra precision while microseconds are precise enough.\n> Still, we could also make the seconds and milliseconds conversion code\n> integer only and e.g. return two integers with the value before and\n> after the comma. 
FWICS, the functions are nowhere used in performance\n> critical code, so it doesn't really make a difference performance-wise.\n>\n> >\n> > +1, and feel free to carry this patch forward - I'll try to make an\n> > effort to review my earlier testing issues again, as well as your\n> > later improvements to the patch.\n> Moved to the current commit fest. Will you become reviewer?\n> >\n> > Also, FYI, I just posted an alternate idea for speeding up EXPLAIN\n> > ANALYZE with timing over in [0], using a sampling-based approach to\n> > reduce the timing overhead.\n>\n> Interesting idea. I'll reply with some thoughts on the corresponding thread.\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/CALDaNm3kRBGPhndujr9JcjjbDCG3anhj0vW8b9YtbXrBDMSvvw%40mail.gmail.com\n\nCFBot shows some compilation errors as in [1], please post an updated\nversion for the same:\n09:08:12.525] /usr/bin/ld:\nsrc/bin/pg_test_timing/pg_test_timing.p/pg_test_timing.c.o: warning:\nrelocation against `cycles_to_sec' in read-only section `.text'\n[09:08:12.525] /usr/bin/ld:\nsrc/bin/pg_test_timing/pg_test_timing.p/pg_test_timing.c.o: in\nfunction `pg_clock_gettime_ref_cycles':\n[09:08:12.525] /tmp/cirrus-ci-build/build/../src/include/portability/instr_time.h:119:\nundefined reference to `use_rdtsc'\n[09:08:12.525] /usr/bin/ld:\nsrc/bin/pg_test_timing/pg_test_timing.p/pg_test_timing.c.o: in\nfunction `test_timing':\n[09:08:12.525] /tmp/cirrus-ci-build/build/../src/bin/pg_test_timing/pg_test_timing.c:135:\nundefined reference to `pg_clock_gettime_initialize_rdtsc'\n[09:08:12.525] /usr/bin/ld:\n/tmp/cirrus-ci-build/build/../src/bin/pg_test_timing/pg_test_timing.c:137:\nundefined reference to `cycles_to_us'\n[09:08:12.525] /usr/bin/ld:\n/tmp/cirrus-ci-build/build/../src/bin/pg_test_timing/pg_test_timing.c:146:\nundefined reference to `cycles_to_us'\n[09:08:12.525] /usr/bin/ld:\n/tmp/cirrus-ci-build/build/../src/bin/pg_test_timing/pg_test_timing.c:169:\nundefined reference to 
`cycles_to_us'\n[09:08:12.525] /usr/bin/ld:\n/tmp/cirrus-ci-build/build/../src/bin/pg_test_timing/pg_test_timing.c:176:\nundefined reference to `cycles_to_sec'\n[09:08:12.525] /usr/bin/ld: warning: creating DT_TEXTREL in a PIE\n[09:08:12.525] collect2: error: ld returned 1 exit status\n\n[1] - https://cirrus-ci.com/task/5375312565895168\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 4 Jan 2023 15:45:05 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi,\n\n> CFBot shows some compilation errors as in [1], please post an updated\n> version for the same:\n> 09:08:12.525] /usr/bin/ld:\n> src/bin/pg_test_timing/pg_test_timing.p/pg_test_timing.c.o: warning:\n> relocation against `cycles_to_sec' in read-only section `.text'\n> [09:08:12.525] /usr/bin/ld:\n> src/bin/pg_test_timing/pg_test_timing.p/pg_test_timing.c.o: in\n> function `pg_clock_gettime_ref_cycles':\n> [09:08:12.525] /tmp/cirrus-ci-build/build/../src/include/portability/instr_time.h:119:\n> undefined reference to `use_rdtsc'\n> [09:08:12.525] /usr/bin/ld:\n> src/bin/pg_test_timing/pg_test_timing.p/pg_test_timing.c.o: in\n> function `test_timing':\n> [09:08:12.525] /tmp/cirrus-ci-build/build/../src/bin/pg_test_timing/pg_test_timing.c:135:\n> undefined reference to `pg_clock_gettime_initialize_rdtsc'\n> [09:08:12.525] /usr/bin/ld:\n> /tmp/cirrus-ci-build/build/../src/bin/pg_test_timing/pg_test_timing.c:137:\n> undefined reference to `cycles_to_us'\n> [09:08:12.525] /usr/bin/ld:\n> /tmp/cirrus-ci-build/build/../src/bin/pg_test_timing/pg_test_timing.c:146:\n> undefined reference to `cycles_to_us'\n> [09:08:12.525] /usr/bin/ld:\n> /tmp/cirrus-ci-build/build/../src/bin/pg_test_timing/pg_test_timing.c:169:\n> undefined reference to `cycles_to_us'\n> [09:08:12.525] /usr/bin/ld:\n> /tmp/cirrus-ci-build/build/../src/bin/pg_test_timing/pg_test_timing.c:176:\n> undefined reference to `cycles_to_sec'\n> [09:08:12.525] /usr/bin/ld: warning: creating DT_TEXTREL in a PIE\n> [09:08:12.525] collect2: error: ld returned 1 exit status\n>\n> [1] - https://cirrus-ci.com/task/5375312565895168\n>\n> Regards,\n> Vignesh\n\nI fixed the compilation error on CFBot.\nI missed adding instr_time.c to the Meson makefile.\nNew patch set attached.\n\n-- \nDavid Geier\n(ServiceNow)",
"msg_date": "Wed, 4 Jan 2023 13:02:05 +0100",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-04 13:02:05 +0100, David Geier wrote:\n> From be18633d4735f680c7910fcb4e8ac90c4eada131 Mon Sep 17 00:00:00 2001\n> From: David Geier <geidav.pg@gmail.com>\n> Date: Thu, 17 Nov 2022 10:22:01 +0100\n> Subject: [PATCH 1/3] Change instr_time to just store nanoseconds, that's\n> cheaper.\n\nDoes anybody see a reason to not move forward with this aspect? We do a fair\namount of INSTR_TIME_ACCUM_DIFF() etc, and that gets a good bit cheaper by\njust using nanoseconds. We'd also save memory in BufferUsage (144-122 bytes),\nInstrumentation (16 bytes saved in Instrumentation itself, 32 via\nBufferUsage).\n\nWhile the range of instr_time storing nanoseconds wouldn't be good enough for\na generic timestamp facility (hence using microsecs for Timestamp), the range\nseems plenty for its use of measuring runtime:\n\n((2 ** 63) - 1) / ((10 ** 9) * 60 * 60 * 24 * 365) = ~292 years\n\nOf course, when using CLOCK_REALTIME, this is relative to 1970-01-01, so just\n239 years.\n\nIt could theoretically be a different story, if we stored instr_time's on\ndisk. But we don't, they're ephemeral.\n\n\nThis doesn't buy a whole lot of performance - the bottleneck is the actual\ntimestamp computation. But in a query with not much else going on, it's\nvisible and reproducible. It's, unsurprisingly, a lot easier to see when using\nBUFFERS.\n\nFor both timespec and nanosecond, I measured three server starts, and for each\nstarted server three executions of\npgbench -n -Mprepared -c1 -P5 -T15 -f <(echo \"EXPLAIN (ANALYZE, BUFFERS) SELECT generate_series(1, 10000000) OFFSET 10000000;\")\n\nthe best result is:\ntimespec: 1073.431\nnanosec: 957.532\na ~10% difference\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 13 Jan 2023 11:55:47 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-01-04 13:02:05 +0100, David Geier wrote:\n>> Subject: [PATCH 1/3] Change instr_time to just store nanoseconds, that's\n>> cheaper.\n\n> Does anybody see a reason to not move forward with this aspect? We do a fair\n> amount of INSTR_TIME_ACCUM_DIFF() etc, and that gets a good bit cheaper by\n> just using nanoseconds.\n\nCheaper, and perhaps more accurate too? Don't recall if we have any code\npaths where the input timestamps are likely to be better-than-microsecond,\nbut surely that's coming someday.\n\nI'm unsure that we want to deal with rdtsc's vagaries in general, but\nno objection to changing instr_time.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 13 Jan 2023 15:25:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-13 15:25:16 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Does anybody see a reason to not move forward with this aspect? We do a fair\n> > amount of INSTR_TIME_ACCUM_DIFF() etc, and that gets a good bit cheaper by\n> > just using nanoseconds.\n>\n> Cheaper, and perhaps more accurate too? Don't recall if we have any code\n> paths where the input timestamps are likely to be better-than-microsecond,\n> but surely that's coming someday.\n\ninstr_time on !WIN32 uses struct timespec, so we already should have nanosecond\nprecision available. IOW, we could add an INSTR_TIME_GET_NANOSEC today. Or am I\nmisunderstanding what you mean?\n\n\n> I'm unsure that we want to deal with rdtsc's vagaries in general, but\n> no objection to changing instr_time.\n\nCool.\n\nLooking at the instr_time.h part of the change, I think it should go further,\nand basically do the same thing in the WIN32 path. The only part that needs to\nbe win32 specific is INSTR_TIME_SET_CURRENT(). That'd reduce duplication a\ngood bit.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 13 Jan 2023 12:59:33 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "On Wed, 4 Jan 2023 at 17:32, David Geier <geidav.pg@gmail.com> wrote:\n>\n> I fixed the compilation error on CFBot.\n> I missed adding instr_time.c to the Meson makefile.\n> New patch set attached.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\nff23b592ad6621563d3128b26860bcb41daf9542 ===\n=== applying patch\n./0002-Use-CPU-reference-cycles-via-RDTSC-to-measure-time-v6.patch\n....\npatching file src/tools/msvc/Mkvcbuild.pm\nHunk #1 FAILED at 135.\n1 out of 1 hunk FAILED -- saving rejects to file src/tools/msvc/Mkvcbuild.pm.rej\n\n[1] - http://cfbot.cputube.org/patch_41_3751.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 14 Jan 2023 12:28:44 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-13 11:55:47 -0800, Andres Freund wrote:\n> Does anybody see a reason to not move forward with this aspect? We do a fair\n> amount of INSTR_TIME_ACCUM_DIFF() etc, and that gets a good bit cheaper by\n> just using nanoseconds. We'd also save memory in BufferUsage (144-122 bytes),\n> Instrumentation (16 bytes saved in Instrumentation itself, 32 via\n> BufferUsage).\n\nThis actually under-counted the benefits, because we have two BufferUsage and\ntwo WalUsage in Instrumentation.\n\nBefore:\n /* size: 448, cachelines: 7, members: 20 */\n /* sum members: 445, holes: 1, sum holes: 3 */\nAfter\n /* size: 368, cachelines: 6, members: 20 */\n /* sum members: 365, holes: 1, sum holes: 3 */\n\n\nThe difference in the number of instructions in InstrStopNode is astounding:\n1016 instructions with timespec, 96 instructions with nanoseconds. Some of\nthat is the simpler data structure, some because the compiler now can\nauto-vectorize the four INSTR_TIME_ACCUM_DIFF in BufferUsageAccumDiff into\none.\n\nWe probably should convert Instrumentation->firsttuple to a instr_time now as\nwell, no point in having the code for conversion to double in the hot routine,\nthat can easily happen in explain. But that's for a later patch.\n\n\nI suggested downthread that we should convert the win32 implementation to be\nmore similar to the unix-nanoseconds representation. A blind conversion looks\ngood, and lets us share a number of macros.\n\n\nI wonder if we should deprecate INSTR_TIME_IS_ZERO()/INSTR_TIME_SET_ZERO() and\nallow 0 to be used instead. Not needing INSTR_TIME_SET_ZERO() allows variable\ndefinitions to initialize the value, which does avoid some unnecessarily\nawkward code. Alternatively we could introduce INSTR_TIME_ZERO() for that\npurpose?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 15 Jan 2023 18:36:39 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-02 14:28:20 +0100, David Geier wrote:\n> I also somewhat improved the accuracy of the cycles to milli- and\n> microseconds conversion functions by having two more multipliers with higher\n> precision. For microseconds we could also keep the computation integer-only.\n> I'm wondering what to best do for seconds and milliseconds. I'm currently\n> leaning towards just keeping it as is, because the durations measured and\n> converted are usually long enough that precision shouldn't be a problem.\n\nI'm doubtful this is worth the complexity it incurs. By the time we convert\nout of the instr_time format, the times shouldn't be small enough that the\naccuracy is affected much.\n\nLooking around, most of the existing uses of INSTR_TIME_GET_MICROSEC()\nactually accumulate themselves, and should instead keep things in the\ninstr_time format and convert later. We'd win more accuracy / speed that way.\n\nI don't think the introduction of pg_time_usec_t was a great idea, but oh\nwell.\n\n\n> Additionally, I initialized a few variables of type instr_time which\n> otherwise resulted in warnings due to use of potentially uninitialized\n> variables.\n\nUnless we decide, as I suggested downthread, that we deprecate\nINSTR_TIME_SET_ZERO(), that's unfortunately not the right fix. I've a similar\npatch that adds all the necesarry INSTR_TIME_SET_ZERO() calls.\n\n\n> What about renaming INSTR_TIME_GET_DOUBLE() to INSTR_TIME_GET_SECS() so that\n> it's consistent with the _MILLISEC() and _MICROSEC() variants?\n\n> The INSTR_TIME_GET_MICROSEC() returns a uint64 while the other variants\n> return double. This seems error prone. What about renaming the function or\n> also have the function return a double and cast where necessary at the call\n> site?\n\nI think those should be a separate discussion / patch.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 16 Jan 2023 09:37:23 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi,\n\nthere's minor bitrot in the Mkvcbuild.pm change, making cfbot unhappy.\n\nAs for the patch, I don't have much comments. I'm wondering if it'd be\nuseful to indicate which timing source was actually used for EXPLAIN\nANALYZE, say something like:\n\n Planning time: 0.197 ms\n Execution time: 0.225 ms\n Timing source: clock_gettime (or tsc)\n\nThere has been a proposal to expose this as a GUC (or perhaps as explain\noption), to allow users to pick what timing source to use. I wouldn't go\nthat far - AFAICS is this is meant to be universally better when\navailable. But knowing which source was used seems useful.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 16 Jan 2023 21:34:42 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "po 16. 1. 2023 v 21:34 odesílatel Tomas Vondra <\ntomas.vondra@enterprisedb.com> napsal:\n\n> Hi,\n>\n> there's minor bitrot in the Mkvcbuild.pm change, making cfbot unhappy.\n>\n> As for the patch, I don't have much comments. I'm wondering if it'd be\n> useful to indicate which timing source was actually used for EXPLAIN\n> ANALYZE, say something like:\n>\n> Planning time: 0.197 ms\n> Execution time: 0.225 ms\n> Timing source: clock_gettime (or tsc)\n>\n> There has been a proposal to expose this as a GUC (or perhaps as explain\n> option), to allow users to pick what timing source to use. I wouldn't go\n> that far - AFAICS is this is meant to be universally better when\n> available. But knowing which source was used seems useful.\n>\n\n+1\n\nPavel\n\n\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>\n\npo 16. 1. 2023 v 21:34 odesílatel Tomas Vondra <tomas.vondra@enterprisedb.com> napsal:Hi,\n\nthere's minor bitrot in the Mkvcbuild.pm change, making cfbot unhappy.\n\nAs for the patch, I don't have much comments. I'm wondering if it'd be\nuseful to indicate which timing source was actually used for EXPLAIN\nANALYZE, say something like:\n\n Planning time: 0.197 ms\n Execution time: 0.225 ms\n Timing source: clock_gettime (or tsc)\n\nThere has been a proposal to expose this as a GUC (or perhaps as explain\noption), to allow users to pick what timing source to use. I wouldn't go\nthat far - AFAICS is this is meant to be universally better when\navailable. But knowing which source was used seems useful.+1Pavel\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 16 Jan 2023 21:39:04 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "On Fri, Jan 13, 2023 at 2:56 PM Andres Freund <andres@anarazel.de> wrote:\n> Does anybody see a reason to not move forward with this aspect? We do a fair\n> amount of INSTR_TIME_ACCUM_DIFF() etc, and that gets a good bit cheaper by\n> just using nanoseconds. We'd also save memory in BufferUsage (144-122 bytes),\n> Instrumentation (16 bytes saved in Instrumentation itself, 32 via\n> BufferUsage).\n\nI read through 0001 and it seems basically fine to me. Comments:\n\n1. pg_clock_gettime_ns() doesn't follow pgindent conventions.\n\n2. I'm not entirely sure that the new .?S_PER_.?S macros are\nworthwhile but maybe they are, and in any case I don't care very much.\n\n3. I've always found 'struct timespec' to be pretty annoying\nnotationally, so I like the fact that this patch would reduce use of\nit.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 17 Jan 2023 08:46:12 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-17 08:46:12 -0500, Robert Haas wrote:\n> On Fri, Jan 13, 2023 at 2:56 PM Andres Freund <andres@anarazel.de> wrote:\n> > Does anybody see a reason to not move forward with this aspect? We do a fair\n> > amount of INSTR_TIME_ACCUM_DIFF() etc, and that gets a good bit cheaper by\n> > just using nanoseconds. We'd also save memory in BufferUsage (144-122 bytes),\n> > Instrumentation (16 bytes saved in Instrumentation itself, 32 via\n> > BufferUsage).\n\nHere's an updated version of the move to representing instr_time as\nnanoseconds. It's now split into a few patches:\n\n0001) Add INSTR_TIME_SET_ZERO() calls where otherwise 0002 causes gcc to\n warn\n\n Alternatively we can decide to deprecate INSTR_TIME_SET_ZERO() and\n just allow to assign 0.\n\n0002) Convert instr_time to uint64\n\n This is the cleaned up version of the prior patch. The main change is\n that it deduplicated a lot of the code between the architectures.\n\n0003) Add INSTR_TIME_SET_SECOND()\n\n This is used in 0004. Just allows setting an instr_time to a time in\n seconds, allowing for a cheaper loop exit condition in 0004.\n\n0004) report nanoseconds in pg_test_timing\n\n\nI also couldn't help and hacked a bit on the rdtsc pieces. I did figure out\nhow to do the cycles->nanosecond conversion with integer shift and multiply in\nthe common case, which does show a noticable speedup. But that's for another\nday.\n\nI fought a bit with myself about whether to send those patches in this thread,\nbecause it'll take over the CF entry. But decided that it's ok, given that\nDavid's patches should be rebased over these anyway?\n\n\n> I read through 0001 and it seems basically fine to me. Comments:\n>\n> 1. pg_clock_gettime_ns() doesn't follow pgindent conventions.\n\nFixed.\n\n\n> 2. I'm not entirely sure that the new .?S_PER_.?S macros are\n> worthwhile but maybe they are, and in any case I don't care very much.\n\nThere's now fewer. But those I'd like to keep. I just end up counting digits\nmanually way too many times.\n\n\n> 3. I've always found 'struct timespec' to be pretty annoying\n> notationally, so I like the fact that this patch would reduce use of\n> it.\n\nSame.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Tue, 17 Jan 2023 08:47:58 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Here's an updated version of the move to representing instr_time as\n> nanoseconds. It's now split into a few patches:\n\nI took a quick look through this.\n\n> 0001) Add INSTR_TIME_SET_ZERO() calls where otherwise 0002 causes gcc to\n> warn\n> Alternatively we can decide to deprecate INSTR_TIME_SET_ZERO() and\n> just allow to assign 0.\n\nI think it's probably wise to keep the macro. If we ever rethink this\nagain, we'll be glad we kept it. Similarly, IS_ZERO is a good idea\neven if it would work with just compare-to-zero. I'm almost tempted\nto suggest you define instr_time as a struct with a uint64 field,\njust to help keep us honest about that.\n\n> 0003) Add INSTR_TIME_SET_SECOND()\n> This is used in 0004. Just allows setting an instr_time to a time in\n> seconds, allowing for a cheaper loop exit condition in 0004.\n\nCode and comments are inconsistent about whether it's SET_SECOND or\nSET_SECONDS. I think I prefer the latter, but don't care that much.\n\n> 0004) report nanoseconds in pg_test_timing\n\nDidn't examine 0004 in any detail, but the others look good to go\nother than these nits.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 Jan 2023 12:26:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-17 12:26:57 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Here's an updated version of the move to representing instr_time as\n> > nanoseconds. It's now split into a few patches:\n> \n> I took a quick look through this.\n\nThanks!\n\n\n> > 0001) Add INSTR_TIME_SET_ZERO() calls where otherwise 0002 causes gcc to\n> > warn\n> > Alternatively we can decide to deprecate INSTR_TIME_SET_ZERO() and\n> > just allow to assign 0.\n> \n> I think it's probably wise to keep the macro. If we ever rethink this\n> again, we'll be glad we kept it. Similarly, IS_ZERO is a good idea\n> even if it would work with just compare-to-zero.\n\nPerhaps an INSTR_TIME_ZERO() that could be assigned in variable definitions\ncould give us the best of both worlds?\n\n\n> I'm almost tempted to suggest you define instr_time as a struct with a\n> uint64 field, just to help keep us honest about that.\n\nI can see that making sense. Unless somebody pipes up with opposition to that\nplan soon, I'll see how it goes.\n\n\n> > 0003) Add INSTR_TIME_SET_SECOND()\n> > This is used in 0004. Just allows setting an instr_time to a time in\n> > seconds, allowing for a cheaper loop exit condition in 0004.\n> \n> Code and comments are inconsistent about whether it's SET_SECOND or\n> SET_SECONDS. I think I prefer the latter, but don't care that much.\n\nThat's probably because I couldn't decide... So I'll go with your preference.\n\n\n> > 0004) report nanoseconds in pg_test_timing\n> \n> Didn't examine 0004 in any detail, but the others look good to go\n> other than these nits.\n\nThanks for looking!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 17 Jan 2023 10:50:53 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "On 1/16/23 21:39, Pavel Stehule wrote:\n>\n> po 16. 1. 2023 v 21:34 odesílatel Tomas Vondra \n> <tomas.vondra@enterprisedb.com> napsal:\n>\n> Hi,\n>\n> there's minor bitrot in the Mkvcbuild.pm change, making cfbot unhappy.\n>\n> As for the patch, I don't have much comments. I'm wondering if it'd be\n> useful to indicate which timing source was actually used for EXPLAIN\n> ANALYZE, say something like:\n>\n> Planning time: 0.197 ms\n> Execution time: 0.225 ms\n> Timing source: clock_gettime (or tsc)\n>\n> There has been a proposal to expose this as a GUC (or perhaps as\n> explain\n> option), to allow users to pick what timing source to use. I\n> wouldn't go\n> that far - AFAICS is this is meant to be universally better when\n> available. But knowing which source was used seems useful.\n>\n>\n> +1\n\nThanks for looking at the patch.\n\nI'll fix the merge conflict.\n\nI like the idea of exposing the timing source in the EXPLAIN ANALYZE output.\nIt's a good tradeoff between inspectability and effort, given that RDTSC \nshould always be better to use.\nIf there are no objections I go this way.\n\n-- \nDavid Geier\n(ServiceNow)\n\n\n\n",
"msg_date": "Wed, 18 Jan 2023 13:52:05 +0100",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "On 1/16/23 18:37, Andres Freund wrote:\n> Hi,\n>\n> On 2023-01-02 14:28:20 +0100, David Geier wrote:\n>\n> I'm doubtful this is worth the complexity it incurs. By the time we convert\n> out of the instr_time format, the times shouldn't be small enough that the\n> accuracy is affected much.\nI don't feel strong about it and you have a point that we most likely \nonly convert ones we've accumulated a fair amount of cycles.\n> Looking around, most of the existing uses of INSTR_TIME_GET_MICROSEC()\n> actually accumulate themselves, and should instead keep things in the\n> instr_time format and convert later. We'd win more accuracy / speed that way.\n>\n> I don't think the introduction of pg_time_usec_t was a great idea, but oh\n> well.\nFully agreed. Why not replacing pg_time_usec_t with instr_time in a \nseparate patch? I haven't looked carefully enough if all occurrences \ncould sanely replaced but at least the ones that accumulate time seem \ngood starting points.\n>> Additionally, I initialized a few variables of type instr_time which\n>> otherwise resulted in warnings due to use of potentially uninitialized\n>> variables.\n> Unless we decide, as I suggested downthread, that we deprecate\n> INSTR_TIME_SET_ZERO(), that's unfortunately not the right fix. I've a similar\n> patch that adds all the necesarry INSTR_TIME_SET_ZERO() calls.\nI don't feel strong about it, but like Tom tend towards keeping the \ninitialization macro.\nThanks that you have improved on the first patch and fixed these issues \nin a better way.\n>> What about renaming INSTR_TIME_GET_DOUBLE() to INSTR_TIME_GET_SECS() so that\n>> it's consistent with the _MILLISEC() and _MICROSEC() variants?\n>> The INSTR_TIME_GET_MICROSEC() returns a uint64 while the other variants\n>> return double. This seems error prone. What about renaming the function or\n>> also have the function return a double and cast where necessary at the call\n>> site?\n> I think those should be a separate discussion / patch.\n\nOK. I'll propose follow-on patches ones we're done with the ones at hand.\n\nI'll then rebase the RDTSC patches on your patch set.\n\n-- \nDavid Geier\n(ServiceNow)\n\n\n\n",
"msg_date": "Wed, 18 Jan 2023 14:02:48 +0100",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi,\n\n@Andres: will you take care of these changes and provide me with an \nupdated patch set so I can rebase the RDTSC changes?\nOtherwise, I can also apply Tom suggestions to your patch set and send \nout the complete patch set.\n\n-- \nDavid Geier\n(ServiceNow)\n\n\n\n",
"msg_date": "Wed, 18 Jan 2023 14:05:35 +0100",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi Andres,\n\n> I also couldn't help and hacked a bit on the rdtsc pieces. I did figure out\n> how to do the cycles->nanosecond conversion with integer shift and multiply in\n> the common case, which does show a noticable speedup. But that's for another\n> day.\nI also have code for that here. I decided against integrating it because \nwe don't convert frequently enough to make it matter. Or am I missing \nsomething?\n> I fought a bit with myself about whether to send those patches in this thread,\n> because it'll take over the CF entry. But decided that it's ok, given that\n> David's patches should be rebased over these anyway?\nThat's alright.\nThough, I would hope we attempt to bring your patch set as well as the \nRDTSC patch set in.\n\n-- \nDavid Geier\n(ServiceNow)\n\n\n\n",
"msg_date": "Thu, 19 Jan 2023 11:47:49 +0100",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "On 1/18/23 13:52, David Geier wrote:\n> On 1/16/23 21:39, Pavel Stehule wrote:\n>>\n>> po 16. 1. 2023 v 21:34 odesílatel Tomas Vondra \n>> <tomas.vondra@enterprisedb.com> napsal:\n>>\n>> Hi,\n>>\n>> there's minor bitrot in the Mkvcbuild.pm change, making cfbot \n>> unhappy.\n>>\n>> As for the patch, I don't have much comments. I'm wondering if \n>> it'd be\n>> useful to indicate which timing source was actually used for EXPLAIN\n>> ANALYZE, say something like:\n>>\n>> Planning time: 0.197 ms\n>> Execution time: 0.225 ms\n>> Timing source: clock_gettime (or tsc)\n>>\n>> +1\n>\n> I like the idea of exposing the timing source in the EXPLAIN ANALYZE \n> output.\n> It's a good tradeoff between inspectability and effort, given that \n> RDTSC should always be better to use.\n> If there are no objections I go this way.\nThinking about this a little more made me realize that this will cause \ndifferent pg_regress output depending on the platform. So if we go this \nroute we would at least need an option for EXPLAIN ANALYZE to disable \nit. Or rather have it disabled by default and allow for enabling it. \nThoughts?\n\n-- \nDavid Geier\n(ServiceNow)\n\n\n\n",
"msg_date": "Fri, 20 Jan 2023 07:43:00 +0100",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "\n\nOn 1/20/23 07:43, David Geier wrote:\n> On 1/18/23 13:52, David Geier wrote:\n>> On 1/16/23 21:39, Pavel Stehule wrote:\n>>>\n>>> po 16. 1. 2023 v 21:34 odesílatel Tomas Vondra\n>>> <tomas.vondra@enterprisedb.com> napsal:\n>>>\n>>> Hi,\n>>>\n>>> there's minor bitrot in the Mkvcbuild.pm change, making cfbot\n>>> unhappy.\n>>>\n>>> As for the patch, I don't have much comments. I'm wondering if\n>>> it'd be\n>>> useful to indicate which timing source was actually used for EXPLAIN\n>>> ANALYZE, say something like:\n>>>\n>>> Planning time: 0.197 ms\n>>> Execution time: 0.225 ms\n>>> Timing source: clock_gettime (or tsc)\n>>>\n>>> +1\n>>\n>> I like the idea of exposing the timing source in the EXPLAIN ANALYZE\n>> output.\n>> It's a good tradeoff between inspectability and effort, given that\n>> RDTSC should always be better to use.\n>> If there are no objections I go this way.\n> Thinking about this a little more made me realize that this will cause\n> different pg_regress output depending on the platform. So if we go this\n> route we would at least need an option for EXPLAIN ANALYZE to disable\n> it. Or rather have it disabled by default and allow for enabling it.\n> Thoughts?\n> \n\nWhat about only showing it for VERBOSE mode? I don't think there are\nvery many tests doing EXPLAIN (ANALYZE, VERBOSE) - a quick grep found\none such place in partition_prune.sql.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 20 Jan 2023 12:13:09 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-17 10:50:53 -0800, Andres Freund wrote:\n> On 2023-01-17 12:26:57 -0500, Tom Lane wrote:\n> > > 0001) Add INSTR_TIME_SET_ZERO() calls where otherwise 0002 causes gcc to\n> > > warn\n> > > Alternatively we can decide to deprecate INSTR_TIME_SET_ZERO() and\n> > > just allow to assign 0.\n> >\n> > I think it's probably wise to keep the macro. If we ever rethink this\n> > again, we'll be glad we kept it. Similarly, IS_ZERO is a good idea\n> > even if it would work with just compare-to-zero.\n>\n> Perhaps an INSTR_TIME_ZERO() that could be assigned in variable definitions\n> could give us the best of both worlds?\n\nI tried that in the attached 0005. I found that it reads better if I also add\nINSTR_TIME_CURRENT(). If we decide to go for this, I'd roll it into 0001\ninstead, but I wanted to get agreement on it first.\n\nComments?\n\n\n> > I'm almost tempted to suggest you define instr_time as a struct with a\n> > uint64 field, just to help keep us honest about that.\n>\n> I can see that making sense. Unless somebody pipes up with opposition to that\n> plan soon, I'll see how it goes.\n\nDone in the attached. I think it looks good. Actually found a type confusion\nbuglet in 0004, so the type safety benefit is noticable.\n\nIt does require a new INSTR_TIME_IS_LT() for the loop exit condition in 0004,\nbut that seems fine.\n\n\nBesides cosmetic stuff I also added back the cast to double in window's\nINSTR_TIME_GET_NANOSEC() - I think there's an overflow danger without it.\n\nWe should make this faster by pre-computing\n (double) NS_PER_S / GetTimerFrequency()\nonce, as that'd avoid doing the the slow division on every conversion. But\nthat's an old issue and thus better tackled separately.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Fri, 20 Jan 2023 16:40:32 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n>> Perhaps an INSTR_TIME_ZERO() that could be assigned in variable definitions\n>> could give us the best of both worlds?\n\n> I tried that in the attached 0005. I found that it reads better if I also add\n> INSTR_TIME_CURRENT(). If we decide to go for this, I'd roll it into 0001\n> instead, but I wanted to get agreement on it first.\n\n-1 from here. This forecloses the possibility that it's best to use more\nthan one assignment to initialize the value, and the code doesn't read\nany better than it did before.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 Jan 2023 22:27:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-20 22:27:07 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> >> Perhaps an INSTR_TIME_ZERO() that could be assigned in variable definitions\n> >> could give us the best of both worlds?\n> \n> > I tried that in the attached 0005. I found that it reads better if I also add\n> > INSTR_TIME_CURRENT(). If we decide to go for this, I'd roll it into 0001\n> > instead, but I wanted to get agreement on it first.\n> \n> -1 from here. This forecloses the possibility that it's best to use more\n> than one assignment to initialize the value, and the code doesn't read\n> any better than it did before.\n\nI think it does read a bit better, but it's a pretty small improvement. So\nI'll leave this aspect be for now.\n\nThanks for checking.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 20 Jan 2023 19:54:56 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-19 11:47:49 +0100, David Geier wrote:\n> > I also couldn't help and hacked a bit on the rdtsc pieces. I did figure out\n> > how to do the cycles->nanosecond conversion with integer shift and multiply in\n> > the common case, which does show a noticable speedup. But that's for another\n> > day.\n> I also have code for that here. I decided against integrating it because we\n> don't convert frequently enough to make it matter. Or am I missing\n> something?\n\nWe do currently do the conversion quite frequently. Admittedly I was\npartially motivated by trying to get the per-loop overhead in pg_test_timing\ndown ;)\n\nBut I think it's a real issue. Places where we do, but shouldn't, convert:\n\n- ExecReScan() - quite painful, we can end up with a lot of those\n- InstrStopNode() - adds a good bit of overhead to simple\n- PendingWalStats.wal_write_time - this is particularly bad because it happens\n within very contended code\n- calls to pgstat_count_buffer_read_time(), pgstat_count_buffer_write_time() -\n they can be very frequent\n- pgbench.c, as we already discussed\n- pg_stat_statements.c\n- ...\n\nThese all will get a bit slower when moving to a \"variable\" frequency.\n\n\nWhat was your approach for avoiding the costly operation? I ended up with a\ninteger multiplication + shift approximation for the floating point\nmultiplication (which in turn uses the inverse of the division by the\nfrequency). To allow for sufficient precision while also avoiding overflows, I\nhad to make that branch conditional, with a slow path for large numbers of\nnanoseconds.\n\n\n> > I fought a bit with myself about whether to send those patches in this thread,\n> > because it'll take over the CF entry. But decided that it's ok, given that\n> > David's patches should be rebased over these anyway?\n> That's alright.\n> Though, I would hope we attempt to bring your patch set as well as the RDTSC\n> patch set in.\n\nI think it'd be great - but I'm not sure we're there yet, reliability and\ncode-complexity wise.\n\nI think it might be worth makign the rdts aspect somewhat\nmeasurable. E.g. allowing pg_test_timing to use both at the same time, and\nhave it compare elapsed time with both sources of counters.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 20 Jan 2023 20:12:00 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-20 07:43:00 +0100, David Geier wrote:\n> On 1/18/23 13:52, David Geier wrote:\n> > On 1/16/23 21:39, Pavel Stehule wrote:\n> > > \n> > > po 16. 1. 2023 v 21:34 odesílatel Tomas Vondra\n> > > <tomas.vondra@enterprisedb.com> napsal:\n> > > \n> > >     Hi,\n> > > \n> > >     there's minor bitrot in the Mkvcbuild.pm change, making cfbot\n> > > unhappy.\n> > > \n> > >     As for the patch, I don't have much comments. I'm wondering if\n> > > it'd be\n> > >     useful to indicate which timing source was actually used for EXPLAIN\n> > >     ANALYZE, say something like:\n> > > \n> > >      Planning time: 0.197 ms\n> > >      Execution time: 0.225 ms\n> > >      Timing source: clock_gettime (or tsc)\n> > > \n> > > +1\n> > \n> > I like the idea of exposing the timing source in the EXPLAIN ANALYZE\n> > output.\n> > It's a good tradeoff between inspectability and effort, given that RDTSC\n> > should always be better to use.\n> > If there are no objections I go this way.\n> Thinking about this a little more made me realize that this will cause\n> different pg_regress output depending on the platform. So if we go this\n> route we would at least need an option for EXPLAIN ANALYZE to disable it. Or\n> rather have it disabled by default and allow for enabling it. Thoughts?\n\nThe elapsed time is already inherently unstable, so we shouldn't have any test\noutput showing the time.\n\nBut I doubt showing it in every explain is a good idea - we use instr_time in\nplenty of other places. Why show it in explain, but not in all those other\nplaces?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 20 Jan 2023 20:14:39 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-18 14:05:35 +0100, David Geier wrote:\n> @Andres: will you take care of these changes and provide me with an updated\n> patch set so I can rebase the RDTSC changes?\n> Otherwise, I can also apply Tom suggestions to your patch set and send out\n> the complete patch set.\n\nI'm planning to push most of my changes soon, had hoped to get to it a bit\nsooner, but ...\n\nIf you have time to look at the pg_test_timing part, it'd be\nappreciated. That's a it larger, and nobody looked at it yet. So I'm a bit\nhesitant to push it.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 20 Jan 2023 20:16:13 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-18 14:02:48 +0100, David Geier wrote:\n> On 1/16/23 18:37, Andres Freund wrote:\n> > I'm doubtful this is worth the complexity it incurs. By the time we convert\n> > out of the instr_time format, the times shouldn't be small enough that the\n> > accuracy is affected much.\n>\n> I don't feel strong about it and you have a point that we most likely only\n> convert ones we've accumulated a fair amount of cycles.\n\nI think we can avoid the issue another way. The inaccuracy comes from the\ncycles_to_sec ending up very small, right? Right now your patch has (and\nprobably my old version similarly had):\n\ncycles_to_sec = 1.0 / (tsc_freq * 1000);\n\nI think it's better if we have one multiplier to convert cycles to nanoseconds\n- that'll be a double comparatively close to 1. We can use that to implement\nINSTR_TIME_GET_NANOSECONDS(). The conversion to microseconds then is just a\ndivision by 1000 (which most compilers convert into a multiplication/shift\ncombo), and the conversions to milliseconds and seconds will be similar.\n\nBecause we'll never \"wrongly\" go into the \"huge number\" or \"very small number\"\nranges, that should provide sufficient precision? We'll of course still end up\nwith a very small number when converting a few nanoseconds to seconds, but\nthat's ok because it's the precision being asked for, instead of loosing\nprecision in some intermediate representation.\n\n\n> > Looking around, most of the existing uses of INSTR_TIME_GET_MICROSEC()\n> > actually accumulate themselves, and should instead keep things in the\n> > instr_time format and convert later. We'd win more accuracy / speed that way.\n> > \n> > I don't think the introduction of pg_time_usec_t was a great idea, but oh\n> > well.\n> Fully agreed. Why not replacing pg_time_usec_t with instr_time in a separate\n> patch?\n\npgbench used to use instr_time, but it was replaced by somebody thinking the\nAPI is too cumbersome. Which I can't quite deny, even though I think the\nspecific change isn't great.\n\nBut yes, this should definitely be a separate patch.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 20 Jan 2023 20:29:10 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "On Fri, Jan 20, 2023 at 04:40:32PM -0800, Andres Freund wrote:\n> From 5a458d4584961dedd3f80a07d8faea66e57c5d94 Mon Sep 17 00:00:00 2001\n> From: Andres Freund <andres@anarazel.de>\n> Date: Mon, 16 Jan 2023 11:19:11 -0800\n> Subject: [PATCH v8 4/5] wip: report nanoseconds in pg_test_timing\n\n> <para>\n> - The i7-860 system measured runs the count query in 9.8 ms while\n> - the <command>EXPLAIN ANALYZE</command> version takes 16.6 ms, each\n> - processing just over 100,000 rows. That 6.8 ms difference means the timing\n> - overhead per row is 68 ns, about twice what pg_test_timing estimated it\n> - would be. Even that relatively small amount of overhead is making the fully\n> - timed count statement take almost 70% longer. On more substantial queries,\n> - the timing overhead would be less problematic.\n> + The i9-9880H system measured shows an execution time of 4.116 ms for the\n> + <literal>TIMING OFF</literal> query, and 6.965 ms for the\n> + <literal>TIMING ON</literal>, each processing 100,000 rows.\n> +\n> + That 2.849 ms difference means the timing overhead per row is 28 ns. As\n> + <literal>TIMING ON</literal> measures timestamps twice per row returned by\n> + an executor node, the overhead is very close to what pg_test_timing\n> + estimated it would be.\n> +\n> + more than what pg_test_timing estimated it would be. Even that relatively\n> + small amount of overhead is making the fully timed count statement take\n> + about 60% longer. On more substantial queries, the timing overhead would\n> + be less problematic.\n\nI guess you intend to merge these two paragraphs ?\n\n\n",
"msg_date": "Fri, 20 Jan 2023 22:50:37 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "On 2023-01-20 22:50:37 -0600, Justin Pryzby wrote:\n> On Fri, Jan 20, 2023 at 04:40:32PM -0800, Andres Freund wrote:\n> > From 5a458d4584961dedd3f80a07d8faea66e57c5d94 Mon Sep 17 00:00:00 2001\n> > From: Andres Freund <andres@anarazel.de>\n> > Date: Mon, 16 Jan 2023 11:19:11 -0800\n> > Subject: [PATCH v8 4/5] wip: report nanoseconds in pg_test_timing\n> \n> > <para>\n> > - The i7-860 system measured runs the count query in 9.8 ms while\n> > - the <command>EXPLAIN ANALYZE</command> version takes 16.6 ms, each\n> > - processing just over 100,000 rows. That 6.8 ms difference means the timing\n> > - overhead per row is 68 ns, about twice what pg_test_timing estimated it\n> > - would be. Even that relatively small amount of overhead is making the fully\n> > - timed count statement take almost 70% longer. On more substantial queries,\n> > - the timing overhead would be less problematic.\n> > + The i9-9880H system measured shows an execution time of 4.116 ms for the\n> > + <literal>TIMING OFF</literal> query, and 6.965 ms for the\n> > + <literal>TIMING ON</literal>, each processing 100,000 rows.\n> > +\n> > + That 2.849 ms difference means the timing overhead per row is 28 ns. As\n> > + <literal>TIMING ON</literal> measures timestamps twice per row returned by\n> > + an executor node, the overhead is very close to what pg_test_timing\n> > + estimated it would be.\n> > +\n> > + more than what pg_test_timing estimated it would be. Even that relatively\n> > + small amount of overhead is making the fully timed count statement take\n> > + about 60% longer. On more substantial queries, the timing overhead would\n> > + be less problematic.\n> \n> I guess you intend to merge these two paragraphs ?\n\nOops. 
I was intending to drop the last paragraph.\n\nLooking at the docs again I noticed that I needed to rephrase the 'acpi_pm'\nsection further, as I'd left the \"a small multiple of what's measured directly\nby this utility\" language in there.\n\nDo the changes otherwise make sense?\n\nThe \"small multiple\" stuff was just due to a) comparing \"raw statement\" with\nexplain analyze b) not accounting for two timestamps being taken per row.\n\nI think it makes sense to remove the \"jiffies\" section - the output shown is\nway outdated. And I don't think the jiffies time counter is something one\nstill sees in the wild, outside of bringing up a new cpu architecture or such.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 20 Jan 2023 21:14:14 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-20 20:16:13 -0800, Andres Freund wrote:\n> On 2023-01-18 14:05:35 +0100, David Geier wrote:\n> > @Andres: will you take care of these changes and provide me with an updated\n> > patch set so I can rebase the RDTSC changes?\n> > Otherwise, I can also apply Tom's suggestions to your patch set and send out\n> > the complete patch set.\n> \n> I'm planning to push most of my changes soon, had hoped to get to it a bit\n> sooner, but ...\n\nI pushed the int64-ification commits.\n\n\n> If you have time to look at the pg_test_timing part, it'd be\n> appreciated. That's a bit larger, and nobody looked at it yet. So I'm a bit\n> hesitant to push it.\n\nI haven't yet pushed the pg_test_timing (nor its small prerequisite)\npatch.\n\nThanks to Justin I've polished the pg_test_timing docs some.\n\n\nI've attached those two patches. Feel free to include them in your series if\nyou want, then the CF entry (and thus cfbot) makes sense again...\n\nGreetings,\n\nAndres Freund",
"msg_date": "Fri, 20 Jan 2023 21:31:57 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-20 21:31:57 -0800, Andres Freund wrote:\n> On 2023-01-20 20:16:13 -0800, Andres Freund wrote:\n> > I'm planning to push most of my changes soon, had hoped to get to it a bit\n> > sooner, but ...\n>\n> I pushed the int64-ification commits.\n\nThere's an odd compilation failure on AIX.\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hoverfly&dt=2023-01-21%2007%3A01%3A42\n\n/opt/IBM/xlc/16.1.0/bin/xlc_r -D_LARGE_FILES=1 -DRANDOMIZE_ALLOCATED_MEMORY -qnoansialias -g -O2 -qmaxmem=33554432 -qsuppress=1500-010:1506-995 -qsuppress=1506-010:1506-416:1506-450:1506-480:1506-481:1506-492:1506-944:1506-1264 -qinfo=all:nocnd:noeff:noext:nogot:noini:noord:nopar:noppc:norea:nouni:nouse -qinfo=nounset -qvisibility=hidden -I. -I. -I/opt/freeware/include/python3.5m -I../../../src/include -I/home/nm/sw/nopath/icu58.3-64/include -I/home/nm/sw/nopath/libxml2-64/include/libxml2 -I/home/nm/sw/nopath/uuid-64/include -I/home/nm/sw/nopath/openldap-64/include -I/home/nm/sw/nopath/icu58.3-64/include -I/home/nm/sw/nopath/libxml2-64/include -c -o plpy_cursorobject.o plpy_cursorobject.c\n\"../../../src/include/portability/instr_time.h\", line 116.9: 1506-304 (I) No function prototype given for \"clock_gettime\".\n\"../../../src/include/portability/instr_time.h\", line 116.23: 1506-045 (S) Undeclared identifier CLOCK_REALTIME.\n<builtin>: recipe for target 'plpy_cursorobject.o' failed\n\nbut files including instr_time.h *do* build successfully, e.g. 
instrument.c:\n\n/opt/IBM/xlc/16.1.0/bin/xlc_r -D_LARGE_FILES=1 -DRANDOMIZE_ALLOCATED_MEMORY -qnoansialias -g -O2 -qmaxmem=33554432 -qsuppress=1500-010:1506-995 -qsuppress=1506-010:1506-416:1506-450:1506-480:1506-481:1506-492:1506-944:1506-1264 -qinfo=all:nocnd:noeff:noext:nogot:noini:noord:nopar:noppc:norea:nouni:nouse -qinfo=nounset -I../../../src/include -I/home/nm/sw/nopath/icu58.3-64/include -I/home/nm/sw/nopath/libxml2-64/include/libxml2 -I/home/nm/sw/nopath/uuid-64/include -I/home/nm/sw/nopath/openldap-64/include -I/home/nm/sw/nopath/icu58.3-64/include -I/home/nm/sw/nopath/libxml2-64/include -c -o instrument.o instrument.c\n\n\nBefore the change the clock_gettime() call was in a macro and thus could be\nreferenced even without a prior declaration, as long as places using\nINSTR_TIME_SET_CURRENT() had all the necessary includes and defines.\n\n\nArgh:\n\nThere's a nice bit in plpython.h:\n\n/*\n * Include order should be: postgres.h, other postgres headers, plpython.h,\n * other plpython headers. (In practice, other plpython headers will also\n * include this file, so that they can compile standalone.)\n */\n#ifndef POSTGRES_H\n#error postgres.h must be included before plpython.h\n#endif\n\n/*\n * Undefine some things that get (re)defined in the Python headers. They aren't\n * used by the PL/Python code, and all PostgreSQL headers should be included\n * earlier, so this should be pretty safe.\n */\n#undef _POSIX_C_SOURCE\n#undef _XOPEN_SOURCE\n\n\nthe relevant stuff in time.h is indeed guarded by\n#if _XOPEN_SOURCE>=500\n\n\nI don't think the plpython code actually follows the rule about including all\npostgres headers earlier.\n\nplpy_typeio.h:\n\n#include \"access/htup.h\"\n#include \"fmgr.h\"\n#include \"plpython.h\"\n#include \"utils/typcache.h\"\n\nplpy_cursorobject.c:\n\n#include \"access/xact.h\"\n#include \"catalog/pg_type.h\"\n#include \"mb/pg_wchar.h\"\n#include \"plpy_cursorobject.h\"\n#include \"plpy_elog.h\"\n#include \"plpy_main.h\"\n#include \"plpy_planobject.h\"\n#include \"plpy_procedure.h\"\n#include \"plpy_resultobject.h\"\n#include \"plpy_spi.h\"\n#include \"plpython.h\"\n#include \"utils/memutils.h\"\n\n\nIt strikes me as a uh, not good idea to undefine _POSIX_C_SOURCE,\n_XOPEN_SOURCE.\n\nThe include order aspect was perhaps feasible when there just was plpython.c,\nbut with the split into many different C files and many headers, it seems hard\nto maintain. There's a lot of violations afaics.\n\nThe undefines were added in a11cf433413, the split in 147c2482542.\n\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 21 Jan 2023 11:03:03 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi,\n\nOn 1/21/23 05:14, Andres Freund wrote:\n> The elapsed time is already inherently unstable, so we shouldn't have any test\n> output showing the time.\n>\n> But I doubt showing it in every explain is a good idea - we use instr_time in\n> plenty of other places. Why show it in explain, but not in all those other\n> places?\n\nYeah. I thought it would only be an issue if we showed it \nunconditionally in EXPLAIN ANALYZE. If we only show it with TIMING ON, \nwe're likely fine with pretty much all regression tests.\n\nBut given the different opinions, I'll leave it out in the new patch set \nfor the moment being.\n\n-- \nDavid Geier\n(ServiceNow)\n\n\n\n",
"msg_date": "Mon, 23 Jan 2023 18:23:17 +0100",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-23 18:23:17 +0100, David Geier wrote:\n> On 1/21/23 05:14, Andres Freund wrote:\n> > The elapsed time is already inherently unstable, so we shouldn't have any test\n> > output showing the time.\n> > \n> > But I doubt showing it in every explain is a good idea - we use instr_time in\n> > plenty of other places. Why show it in explain, but not in all those other\n> > places?\n> \n> Yeah. I thought it would only be an issue if we showed it unconditionally in\n> EXPLAIN ANALYZE. If we only show it with TIMING ON, we're likely fine with\n> pretty much all regression tests.\n\nIf we add it, it probably shouldn't depend on TIMING, but on\nSUMMARY. Regression test queries showing EXPLAIN ANALYZE output all do\nsomething like\n EXPLAIN (ANALYZE, COSTS OFF, SUMMARY OFF, TIMING OFF)\n\nthe SUMMARY OFF gets rid of the \"top-level\" \"Planning Time\" and \"Execution\nTime\", whereas the TIMING OFF gets rid of the per-node timing. Those are\nseparate options because per-node timing is problematic performance-wise\n(right now), but whole-query timing rarely is.\n\n\n> But given the different opinions, I'll leave it out in the new patch set for\n> the moment being.\n\nMakes sense.\n\n\nAnother, independent, thing worth thinking about: I think we might want to\nexpose both rdtsc and rdtscp. For something like\nInstrStartNode()/InstrStopNode(), avoiding the \"one-way barrier\" of rdtscp is\nquite important to avoid changing the query performance. But for measuring\nwhole-query time, we likely want to measure the actual time.\n\nIt probably won't matter hugely for the whole query time - the out of order\nwindow of modern CPUs is large, but not *that* large - but I don't think we\ncan generally assume that.\n\nI'm thinking of something like INSTR_TIME_SET_CURRENT() and\nINSTR_TIME_SET_CURRENT_FAST() or _NOBARRIER().\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 23 Jan 2023 09:41:58 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi,\n\nOn 1/21/23 05:12, Andres Freund wrote:\n> We do currently do the conversion quite frequently. Admittedly I was\n> partially motivated by trying to get the per-loop overhead in pg_test_timing\n> down ;)\n>\n> But I think it's a real issue. Places where we do, but shouldn't, convert:\n>\n> - ExecReScan() - quite painful, we can end up with a lot of those\n> - InstrStopNode() - adds a good bit of overhead to simple\nInstrStopNode() doesn't convert in the general case but only for the \nfirst tuple or when async. So it goes somewhat hand in hand with \nExecReScan().\n> - PendingWalStats.wal_write_time - this is particularly bad because it happens\n> within very contended code\n> - calls to pgstat_count_buffer_read_time(), pgstat_count_buffer_write_time() -\n> they can be very frequent\n> - pgbench.c, as we already discussed\n> - pg_stat_statements.c\n> - ...\n>\n> These all will get a bit slower when moving to a \"variable\" frequency.\nI wonder if we will be able to measure any of them easily. But given \nthat it's many more places than I had realized and given that the \noptimized code is not too involved, let's give it a try.\n> What was your approach for avoiding the costly operation? I ended up with a\n> integer multiplication + shift approximation for the floating point\n> multiplication (which in turn uses the inverse of the division by the\n> frequency). To allow for sufficient precision while also avoiding overflows, I\n> had to make that branch conditional, with a slow path for large numbers of\n> nanoseconds.\n\nIt seems like we ended up with the same. I do:\n\nsec = ticks / frequency_hz\nns = ticks / frequency_hz * 1,000,000,000\nns = ticks * (1,000,000,000 / frequency_hz)\nns = ticks * (1,000,000 / frequency_khz) <-- now in kilohertz\n\nNow, the constant scaling factor in parentheses is typically a floating \npoint number. For example for a frequency of 2.5 GHz it would be 2.5. 
To \nwork around that we can do something like:\n\nns = ticks * (1,000,000 * scaler / frequency_khz) / scaler\n\nWhere scaler is a power-of-2, big enough to maintain enough precision \nwhile allowing for a shift to implement the division.\n\nThe additional multiplication with scaler makes the maximum range \ngo down, because we must ensure we never overflow. I'm wondering if we \ncannot pick scaler in such a way that the remaining range of cycles is large \nenough for our use case and we can therefore live without bothering for \nthe overflow case. What would be \"enough\"? 1 year? 10 years? ...\n\nOtherwise, we indeed need code that cares for the potential overflow. My \nhunch is that it can be done branchless, but it for sure adds dependent \ninstructions. Maybe in that case a branch that almost \ncertainly will never be taken is better?\n\nI'll include the code in the new patch set which I'll submit \ntomorrow at the latest.\n\n> I think it'd be great - but I'm not sure we're there yet, reliability and\n> code-complexity wise.\nThanks to your commits, the diff of the new patch set will already be \nmuch smaller and easier to review. What's your biggest concern in terms \nof reliability?\n> I think it might be worth making the rdtsc aspect somewhat\n> measurable. E.g. allowing pg_test_timing to use both at the same time, and\n> have it compare elapsed time with both sources of counters.\nI haven't yet looked into pg_test_timing. I'll do that while including \nyour patches into the new patch set.\n\n-- \nDavid Geier\n(ServiceNow)\n\n\n\n",
"msg_date": "Mon, 23 Jan 2023 18:49:37 +0100",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi,\n\nOn 1/21/23 06:31, Andres Freund wrote:\n> I pushed the int64-ification commits.\n\nGreat. I started rebasing.\n\nOne thing I was wondering about: why did you choose to use a signed \ninstead of an unsigned 64-bit integer for the ticks?\n>> If you have time to look at the pg_test_timing part, it'd be\n>> appreciated. That's a bit larger, and nobody looked at it yet. So I'm a bit\n>> hesitant to push it.\n> I haven't yet pushed the pg_test_timing (nor its small prerequisite)\n> patch.\n>\n> I've attached those two patches. Feel free to include them in your series if\n> you want, then the CF entry (and thus cfbot) makes sense again...\nI'll include them in my new patch set and also have a careful look at them.\n\n-- \nDavid Geier\n(ServiceNow)\n\n\n\n",
"msg_date": "Mon, 23 Jan 2023 18:52:44 +0100",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-23 18:49:37 +0100, David Geier wrote:\n> On 1/21/23 05:12, Andres Freund wrote:\n> > We do currently do the conversion quite frequently. Admittedly I was\n> > partially motivated by trying to get the per-loop overhead in pg_test_timing\n> > down ;)\n> > \n> > But I think it's a real issue. Places where we do, but shouldn't, convert:\n> > \n> > - ExecReScan() - quite painful, we can end up with a lot of those\n> > - InstrStopNode() - adds a good bit of overhead to simple\n> InstrStopNode() doesn't convert in the general case but only for the first\n> tuple or when async. So it goes somewhat hand in hand with ExecReScan().\n\nI think even the first-scan portion is likely noticeable for quick queries -\nyou can quickly end up with 5-10 nodes, even for queries processed in the <\n0.1ms range.\n\nOf course it's way worse with rescans / loops.\n\n\n> > - PendingWalStats.wal_write_time - this is particularly bad because it happens\n> > within very contended code\n> > - calls to pgstat_count_buffer_read_time(), pgstat_count_buffer_write_time() -\n> > they can be very frequent\n> > - pgbench.c, as we already discussed\n> > - pg_stat_statements.c\n> > - ...\n> > \n> > These all will get a bit slower when moving to a \"variable\" frequency.\n\n> I wonder if we will be able to measure any of them easily. But given that\n> it's many more places than I had realized and given that the optimized code\n> is not too involved, let's give it a try.\n\nI think at least some should be converted to just accumulate in an\ninstr_time...\n\n\n\n> > What was your approach for avoiding the costly operation? I ended up with a\n> > integer multiplication + shift approximation for the floating point\n> > multiplication (which in turn uses the inverse of the division by the\n> > frequency). To allow for sufficient precision while also avoiding overflows, I\n> > had to make that branch conditional, with a slow path for large numbers of\n> > nanoseconds.\n> \n> It seems like we ended up with the same. I do:\n> \n> sec = ticks / frequency_hz\n> ns = ticks / frequency_hz * 1,000,000,000\n> ns = ticks * (1,000,000,000 / frequency_hz)\n> ns = ticks * (1,000,000 / frequency_khz) <-- now in kilohertz\n> \n> Now, the constant scaling factor in parentheses is typically a floating\n> point number. For example for a frequency of 2.5 GHz it would be 2.5. To\n> work around that we can do something like:\n> \n> ns = ticks * (1,000,000 * scaler / frequency_khz) / scaler\n> \n> Where scaler is a power-of-2, big enough to maintain enough precision while\n> allowing for a shift to implement the division.\n\nYep, at least quite similar.\n\n\n> The additional multiplication with scaler makes the maximum range go\n> down, because we must ensure we never overflow. I'm wondering if we cannot\n> pick scaler in such a way that the remaining range of cycles is large enough for\n> our use case and we can therefore live without bothering for the overflow\n> case. What would be \"enough\"? 1 year? 10 years? ...\n\nDepending on how low we want to keep the error, I don't think we can:\n\nIf I set the allowed deviation to 10**-9, we end up requiring a shift by 29\nfor common GHz ranges. Clearly 33 bits isn't an interesting range.\n\nBut even if you accept a higher error - we don't have *that* much range\navailable. Assuming a uint64, the range is ~584 years. If we want 10 years of\nrange, we end up with\n\n math.log(((2**64)-1) / (10 * 365 * 60 * 60 * 24 * 10**9), 2)\n ~= 5.87\n\nSo 5 bits available that we could \"use\" for multiply/shift. For something like\n2.5 GHz, that'd be ~2% error, clearly not acceptable. And even with just a year of\nrange, we end up allowing a failure of 30796s, about 8.5 hours, over a year - still too\nhigh.\n\n\nBut I don't think it's really an issue - normally that branch will never be\ntaken (at least within the memory of the branch predictor), which on modern\nCPUs means it'll just be predicted as not taken. So as long as we tell the\ncompiler what's the likely branch, it should be fine. At least as long as the\nbranch compares with a hardcoded number.\n\n\n> > I think it'd be great - but I'm not sure we're there yet, reliability and\n> > code-complexity wise.\n\n> Thanks to your commits, the diff of the new patch set will already be much\n> smaller and easier to review. What's your biggest concern in terms of\n> reliability?\n\n- the restriction just to linux, that'll make testing harder for some, and\n ends up encoding too much OS dependency\n- I think we need both the barrier and non-barrier variant, otherwise I\n suspect we'll end up with inaccuracies we don't want\n- needs lots more documentation about why certain cpuid registers are used\n- cpu microarch dependencies - isn't there, e.g., the case that the scale on\n nehalem has to be different than on later architectures?\n- lack of facility to evaluate how well the different time sources work\n\n\n> > I think it might be worth making the rdtsc aspect somewhat\n> > measurable. E.g. allowing pg_test_timing to use both at the same time, and\n> > have it compare elapsed time with both sources of counters.\n> I haven't yet looked into pg_test_timing. I'll do that while including your\n> patches into the new patch set.\n\nCool.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 23 Jan 2023 12:26:19 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-23 18:52:44 +0100, David Geier wrote:\n> One thing I was wondering about: why did you choose to use a signed instead\n> of an unsigned 64-bit integer for the ticks?\n\nThat's been the case since my first post in the thread :). Mainly, it seems\neasier to detect underflow cases during subtraction that way. And the factor\nof 2 in range doesn't change a whole lot.\n\n\n> > > If you have time to look at the pg_test_timing part, it'd be\n> > > appreciated. That's a bit larger, and nobody looked at it yet. So I'm a bit\n> > > hesitant to push it.\n> > I haven't yet pushed the pg_test_timing (nor its small prerequisite)\n> > patch.\n> > \n> > I've attached those two patches. Feel free to include them in your series if\n> > you want, then the CF entry (and thus cfbot) makes sense again...\n> I'll include them in my new patch set and also have a careful look at them.\n\nThanks.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 23 Jan 2023 12:30:06 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi,\n\nOn 1/23/23 18:41, Andres Freund wrote:\n> If we add it, it probably shouldn't depend on TIMING, but on\n> SUMMARY. Regression test queries showing EXPLAIN ANALYZE output all do\n> something like\n> EXPLAIN (ANALYZE, COSTS OFF, SUMMARY OFF, TIMING OFF)\n>\n> the SUMMARY OFF gets rid of the \"top-level\" \"Planning Time\" and \"Execution\n> Time\", whereas the TIMING OFF gets rid of the per-node timing. Those are\n> separate options because per-node timing is problematic performance-wise\n> (right now), but whole-query timing rarely is.\nMakes sense. I wasn't aware of SUMMARY. Let's keep this option in mind, \nin case we'll revisit exposing the clock source in the future.\n> Another, independent, thing worth thinking about: I think we might want to\n> expose both rdtsc and rdtscp. For something like\n> InstrStartNode()/InstrStopNode(), avoiding the \"one-way barrier\" of rdtscp is\n> quite important to avoid changing the query performance. But for measuring\n> whole-query time, we likely want to measure the actual time.\n>\n> It probably won't matter hugely for the whole query time - the out of order\n> window of modern CPUs is large, but not *that* large - but I don't think we\n> can generally assume that.\n\nThat's what I thought as well. I added INSTR_TIME_SET_CURRENT_FAST() and \nfor now call that variant from InstrStartNode(), InstrEndNode() and \npg_test_timing. To do so in InstrEndNode(), I removed \nINSTR_TIME_SET_CURRENT_LAZY(). Otherwise, two variants of that macro \nwould be needed. INSTR_TIME_SET_CURRENT_LAZY() was only used in a single \nplace and the code is more readable that way. INSTR_TIME_SET_CURRENT() \nis called from a bunch of places. I still have to go through all of them \nand see which should be changed to call the _FAST() variant.\n\nAttached is v7 of the patch:\n\n- Rebased on latest master (most importantly on top of the int64 \ninstr_time commits). 
\n- Includes two commits from Andres which introduce INSTR_TIME_SET_SECONDS(), INSTR_TIME_IS_LT() and WIP to report pg_test_timing output in nanoseconds.\n- Converts ticks to nanoseconds only with integer math, while accounting for overflow.\n- Supports RDTSCP via INSTR_TIME_SET_CURRENT() and introduces INSTR_TIME_SET_CURRENT_FAST(), which uses RDTSC.\n\nI haven't gotten to the following:\n\n- Looking through all calls to INSTR_TIME_SET_CURRENT() and checking if they should be replaced by INSTR_TIME_SET_CURRENT_FAST().\n- Reviewing Andres' commits. Potentially improving on pg_test_timing's output.\n- Looking at enabling RDTSC on more platforms. Is there a minimum set of platforms we would like support for? Windows should be easy. That would also allow to unify the code a little more.\n- Adding more documentation and doing more testing around the calls to CPUID.\n- Profiling and optimizing the code. A quick test showed about 10% improvement over master with TIMING ON vs TIMING OFF, when using the test-case from Andres' e-mail that started this thread.\n\nI hope I'll find time to work on these points during the next days.\n\n-- \nDavid Geier\n(ServiceNow)",
"msg_date": "Tue, 24 Jan 2023 14:30:34 +0100",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi,\n>\n> I think at least some should be converted to just accumulate in an\n> instr_time...\nI think that's for a later patch though?\n> Yep, at least quite similar.\nOK. I coded it up in the latest version of the patch.\n> Depending on how low we want to keep the error, I don't think we can:\n>\n> If I set the allowed deviation to 10**-9, we end up requiring a shift by 29\n> for common GHz ranges. Clearly 33 bits isn't an interesting range.\n>\n> But even if you accept a higher error - we don't have *that* much range\n> available. Assuming a uint64, the range is ~584 years. If we want 10 years of\n> range, we end up with\n>\n> math.log(((2**64)-1) / (10 * 365 * 60 * 60 * 24 * 10**9), 2)\n> ~= 5.87\n>\n> So 5 bits available that we could \"use\" for multiply/shift. For something like\n> 2.5 GHz, that'd be ~2% error, clearly not acceptable. And even with just a year of\n> range, we end up allowing a failure of 30796s, about 8.5 hours, over a year - still too\n> high.\nThanks for doing the math. Agreed. The latest patch detects overflow and \ncorrectly handles it.\n> But I don't think it's really an issue - normally that branch will never be\n> taken (at least within the memory of the branch predictor), which on modern\n> CPUs means it'll just be predicted as not taken. So as long as we tell the\n> compiler what's the likely branch, it should be fine. At least as long as the\n> branch compares with a hardcoded number.\nYeah. The overflow detection just compares two int64. The \"overflow \nthreshold\" is pre-computed.\n> - the restriction just to linux, that'll make testing harder for some, and\n> ends up encoding too much OS dependency\n> - I think we need both the barrier and non-barrier variant, otherwise I\n> suspect we'll end up with inaccuracies we don't want\n> - needs lots more documentation about why certain cpuid registers are used\n> - cpu microarch dependencies - isn't there, e.g., the case that the scale on\n> nehalem has to be different than on later architectures?\n> - lack of facility to evaluate how well the different time sources work\nMakes sense. I carried that list over to my latest e-mail which also \nincludes the patch to have some sort of summary of where we are in a \nsingle place.\n\n-- \nDavid Geier\n(ServiceNow)\n\n\n\n",
"msg_date": "Tue, 24 Jan 2023 14:35:45 +0100",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi,\n\nOn 1/23/23 21:30, Andres Freund wrote:\n> That's been the case since my first post in the thread :). Mainly, it seems\n> easier to detect underflow cases during subtraction that way. And the factor\n> of 2 in range doesn't change a whole lot.\nI just realized it the other day :).\n>>>> If you have time to look at the pg_test_timing part, it'd be\n>>>> appreciated. That's a bit larger, and nobody looked at it yet. So I'm a bit\n>>>> hesitant to push it.\n>>> I haven't yet pushed the pg_test_timing (nor its small prerequisite)\n>>> patch.\n>>>\n>>> I've attached those two patches. Feel free to include them in your series if\n>>> you want, then the CF entry (and thus cfbot) makes sense again...\n>> I'll include them in my new patch set and also have a careful look at them.\n\nI reviewed the prerequisite patch which introduces \nINSTR_TIME_SET_SECONDS(), as well as the pg_test_timing patch. Here are my \ncomments:\n\n- The prerequisite patch looks good to me.\n\n- By default, the test query in the pg_test_timing doc runs serially. \nWhat about adding SET max_parallel_workers_per_gather = 0 to make sure \nit really always does (e.g. on a system with different settings for \nparallel_tuple_cost / parallel_setup_cost)? Otherwise, the numbers will \nbe much more flaky.\n\n- Why have you added a case distinction for diff == 0? Have you \nencountered this case? If so, how? Maybe add a comment.\n\n- To further reduce overhead we could call INSTR_TIME_SET_CURRENT() \nmultiple times. But then again: why do we actually care about the \nper-loop time? Why not instead sum up diff and divide by the number of \niterations to exclude all the overhead in the first place?\n\n- In the computation of the per-loop time in nanoseconds you can now use \nINSTR_TIME_GET_NANOSEC() instead of INSTR_TIME_GET_DOUBLE() * NS_PER_S.\n\nThe rest looks good to me. The rebased patches are part of the patch set \nI sent out yesterday in reply to another mail in this thread.\n\n-- \nDavid Geier\n(ServiceNow)\n\n\n\n",
"msg_date": "Thu, 26 Jan 2023 12:21:13 +0100",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-24 14:30:34 +0100, David Geier wrote:\n> Attached is v7 of the patch:\n> \n> - Rebased on latest master (most importantly on top of the int64 instr_time\n> commits). - Includes two commits from Andres which introduce\n> INSTR_TIME_SET_SECONDS(), INSTR_TIME_IS_LT() and WIP to report\n> pg_test_timing output in nanoseconds. - Converts ticks to nanoseconds only\n> with integer math, while accounting for overflow. - Supports RDTSCP via\n> INSTR_TIME_SET_CURRENT() and introduced INSTR_TIME_SET_CURRENT_FAST() which\n> uses RDTSC.\n> \n> I haven't gotten to the following:\n> \n> - Looking through all calls to INSTR_TIME_SET_CURRENT() and check if they\n> should be replaced by INSTR_TIME_SET_CURRENT_FAST(). - Reviewing Andres\n> commits. Potentially improving on pg_test_timing's output. - Looking at\n> enabling RDTSC on more platforms. Is there a minimum set of platforms we\n> would like support for? Windows should be easy. That would also allow to\n> unify the code a little more. - Add more documentation and do more testing\n> around the calls to CPUID. - Profiling and optimizing the code. A quick test\n> showed about 10% improvement over master with TIMING ON vs TIMING OFF, when\n> using the test-case from Andres' e-mail that started this thread.\n> \n> I hope I'll find time to work on these points during the next days.\n\nThis fails to build on several platforms:\n\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest%2F42%2F3751\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 Feb 2023 10:12:18 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi,\n\nOn 2/7/23 19:12, Andres Freund wrote:\n> This fails to build on several platforms:\n>\n> https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest%2F42%2F3751\n\nI think I fixed the compilation errors. It was due to a few variables \nbeing declared under\n\n#if defined(__x86_64__) && defined(__linux__)\n\nwhile being used also under non x86 Linux.\n\nI also removed again the code to obtain the TSC frequency under \nhypervisors because the TSC is usually emulated and therefore no faster \nthan clock_gettime() anyways. So we now simply fallback to \nclock_gettime() on hypervisors when we cannot obtain the frequency via \nleaf 0x16.\n\nBeyond that I reviewed the first two patches a while ago in [1]. I hope \nwe can progress with them to further reduce the size of this patch set.\n\n[1] \nhttps://www.postgresql.org/message-id/3ac157f7-085d-e071-45fc-b87cd306360c%40gmail.com \n\n\n-- \nDavid Geier\n(ServiceNow)",
"msg_date": "Tue, 14 Feb 2023 12:11:01 +0100",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi!\n\nOn 2/14/23 12:11, David Geier wrote:\n> Hi,\n>\n> I think I fixed the compilation errors. It was due to a few variables \n> being declared under\n>\n> #if defined(__x86_64__) && defined(__linux__)\n>\n> while being used also under non x86 Linux.\n>\n> I also removed again the code to obtain the TSC frequency under \n> hypervisors because the TSC is usually emulated and therefore no \n> faster than clock_gettime() anyways. So we now simply fallback to \n> clock_gettime() on hypervisors when we cannot obtain the frequency via \n> leaf 0x16.\n>\n> Beyond that I reviewed the first two patches a while ago in [1]. I \n> hope we can progress with them to further reduce the size of this \n> patch set.\n>\n> [1] \n> https://www.postgresql.org/message-id/3ac157f7-085d-e071-45fc-b87cd306360c%40gmail.com \n>\n>\nIt still fails.\n\nI'll get Cirrus-CI working on my own Github fork so I can make sure it \nreally compiles on all platforms before I submit a new version.\n\n-- \nDavid Geier\n(ServiceNow)\n\n\n\n",
"msg_date": "Tue, 14 Feb 2023 13:48:56 +0100",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi!\n\nOn 2/14/23 13:48, David Geier wrote:\n>\n> It still fails.\n>\n> I'll get Cirrus-CI working on my own Github fork so I can make sure it \n> really compiles on all platforms before I submit a new version.\n\nIt took some time until Cirrus CI allowed me to run tests against my new \nGitHub account (there's a 3 days freeze to avoid people from getting \nCirrus CI nodes to mine bitcoins :-D). Attached now the latest patch \nwhich passes builds, rebased on latest master.\n\nI also reviewed the first two patches a while ago in [1]. I hope we can \nprogress with them to further reduce the size of this patch set.\n\nBeyond that: I could work on support for more OSs (e.g. starting with \nWindows). Is there appetite for that or do we rather want to instead \nstart with a smaller patch?\n\n[1] \nhttps://www.postgresql.org/message-id/3ac157f7-085d-e071-45fc-b87cd306360c%40gmail.com\n\n-- \nDavid Geier\n(ServiceNow)",
"msg_date": "Mon, 20 Feb 2023 11:36:32 +0100",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "On Mon, 20 Feb 2023 at 16:06, David Geier <geidav.pg@gmail.com> wrote:\n>\n> Hi!\n>\n> On 2/14/23 13:48, David Geier wrote:\n> >\n> > It still fails.\n> >\n> > I'll get Cirrus-CI working on my own Github fork so I can make sure it\n> > really compiles on all platforms before I submit a new version.\n>\n> It took some time until Cirrus CI allowed me to run tests against my new\n> GitHub account (there's a 3 days freeze to avoid people from getting\n> Cirrus CI nodes to mine bitcoins :-D). Attached now the latest patch\n> which passes builds, rebased on latest master.\n>\n> I also reviewed the first two patches a while ago in [1]. I hope we can\n> progress with them to further reduce the size of this patch set.\n>\n> Beyond that: I could work on support for more OSs (e.g. starting with\n> Windows). Is there appetite for that or do we rather want to instead\n> start with a smaller patch?\n\nAre we planning to continue on this and take it further?\nI'm seeing that there has been no activity in this thread for nearly 1\nyear now, I'm planning to close this in the current commitfest unless\nsomeone is planning to take it forward.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 20 Jan 2024 09:03:47 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "On Sat, 20 Jan 2024 at 09:03, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, 20 Feb 2023 at 16:06, David Geier <geidav.pg@gmail.com> wrote:\n> >\n> > Hi!\n> >\n> > On 2/14/23 13:48, David Geier wrote:\n> > >\n> > > It still fails.\n> > >\n> > > I'll get Cirrus-CI working on my own Github fork so I can make sure it\n> > > really compiles on all platforms before I submit a new version.\n> >\n> > It took some time until Cirrus CI allowed me to run tests against my new\n> > GitHub account (there's a 3 days freeze to avoid people from getting\n> > Cirrus CI nodes to mine bitcoins :-D). Attached now the latest patch\n> > which passes builds, rebased on latest master.\n> >\n> > I also reviewed the first two patches a while ago in [1]. I hope we can\n> > progress with them to further reduce the size of this patch set.\n> >\n> > Beyond that: I could work on support for more OSs (e.g. starting with\n> > Windows). Is there appetite for that or do we rather want to instead\n> > start with a smaller patch?\n>\n> Are we planning to continue on this and take it further?\n> I'm seeing that there has been no activity in this thread for nearly 1\n> year now, I'm planning to close this in the current commitfest unless\n> someone is planning to take it forward.\n\nSince the author or no one else showed interest in taking it forward\nand the patch had no activity for more than 1 year, I have changed the\nstatus to RWF. Feel free to add a new CF entry when someone is\nplanning to resume work more actively.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 2 Feb 2024 00:14:05 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
},
{
"msg_contents": "Hi,\n\nAt some point this patch switched from rdtsc to rdtscp, which imo largely\nnegates the point of it. What lead to that?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 1 Jun 2024 11:52:00 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?"
}
] |
[
{
"msg_contents": "I scraped the buildfarm's compiler warnings today, as I do from\ntime to time, and I noticed that half a dozen animals that normally\ndon't report any uninitialized-variable warnings are complaining\nabout \"curitup\" in _bt_doinsert. We traditionally ignore such warnings\nfrom compilers that have demonstrated track records of being stupid\nabout it, but when a reasonably modern compiler shows such a warning\nI think we ought to suppress it. Right now the counts of\nuninitialized-variable warnings in HEAD builds are\n\n 1 calliphoridae\n 1 chipmunk\n 1 coypu\n 1 culicidae\n 2 curculio\n 1 frogfish\n 25 locust\n 24 prairiedog\n\n(curculio is additionally whining about \"curitemid\" in the same function.)\nSo you can see that this one issue has greatly expanded the set of\ncompilers that are unhappy. I can see their point too -- it requires\nsome study to be sure we are assigning curitup before dereferencing it.\n\nThe simplest fix would be to just initialize curitup to NULL in its\ndeclaration. But perhaps there's a better way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 13 Jun 2020 12:17:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Uninitialized-variable warnings in nbtinsert.c"
},
{
"msg_contents": "On Sat, Jun 13, 2020 at 9:17 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I scraped the buildfarm's compiler warnings today, as I do from\n> time to time, and I noticed that half a dozen animals that normally\n> don't report any uninitialized-variable warnings are complaining\n> about \"curitup\" in _bt_doinsert.\n\n(Clearly you meant _bt_check_unique(), not _bt_doinsert().)\n\n> The simplest fix would be to just initialize curitup to NULL in its\n> declaration. But perhaps there's a better way.\n\nThanks for bringing this to my attention. I'll push a commit that\ninitializes curitup shortly, targeting both the v13 branch and the\nmaster branch.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 13 Jun 2020 09:29:18 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Uninitialized-variable warnings in nbtinsert.c"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Sat, Jun 13, 2020 at 9:17 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I scraped the buildfarm's compiler warnings today, as I do from\n>> time to time, and I noticed that half a dozen animals that normally\n>> don't report any uninitialized-variable warnings are complaining\n>> about \"curitup\" in _bt_doinsert.\n\n> (Clearly you meant _bt_check_unique(), not _bt_doinsert().)\n\nAh, right. I was looking at calliphoridae's complaint when I wrote that:\n\nIn file included from /home/andres/build/buildfarm-calliphoridae/HEAD/pgsql.build/../pgsql/src/backend/access/nbtree/nbtinsert.c:18:\n/home/andres/build/buildfarm-calliphoridae/HEAD/pgsql.build/../pgsql/src/backend/access/nbtree/nbtinsert.c: In function \\xe2\\x80\\x98_bt_doinsert\\xe2\\x80\\x99:\n\nbut it must have inlined some stuff first. (A lot of the other\ncomplainers are fingering inline functions in nbtree.h, which\nis even less helpful.)\n\n> Thanks for bringing this to my attention. I'll push a commit that\n> initializes curitup shortly, targeting both the v13 branch and the\n> master branch.\n\nThanks!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 13 Jun 2020 12:56:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Uninitialized-variable warnings in nbtinsert.c"
}
] |
[
{
"msg_contents": "I happened to notice today that, while the rest of the buildfarm is free\nof implicit-fallthrough warnings, jacana is emitting a whole boatload of\nthem. It looks like it must have a different idea of which spellings of\nthe \"fall through\" comment are allowed. Could you check its documentation\nto see what it claims to allow?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 13 Jun 2020 12:25:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "jacana vs -Wimplicit-fallthrough"
},
{
"msg_contents": "\nOn 6/13/20 12:25 PM, Tom Lane wrote:\n> I happened to notice today that, while the rest of the buildfarm is free\n> of implicit-fallthrough warnings, jacana is emitting a whole boatload of\n> them. It looks like it must have a different idea of which spellings of\n> the \"fall through\" comment are allowed. Could you check its documentation\n> to see what it claims to allow?\n>\n> \t\t\t\n\n\nThere doesn't seem to be any docco with it. What's odd is that fairywren\nis supposedly using the identical compiler, but it's on msys2 while\njacana is msys1. I can't see why that should make a difference, though.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Sat, 13 Jun 2020 16:16:51 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: jacana vs -Wimplicit-fallthrough"
}
] |
[
{
"msg_contents": "Hello, as proposed by Pavel Stěhule and discussed on local czech PostgreSQL\nmaillist (\nhttps://groups.google.com/d/msgid/postgresql-cz/CAFj8pRCZ42CBCa1bPHr7htffSV%2BNAcgcHHG0dVqOog4bsu2LFw%40mail.gmail.com?utm_medium=email&utm_source=footer),\nI have prepared an initial patch for COPY command progress reporting.\n\nFew examples first:\n\n\"COPY (SELECT * FROM test) TO '/tmp/ids';\"\n\nyr=# SELECT * from pg_stat_progress_copy;\n pid | datid | datname | relid | direction | file | program |\nlines_processed | file_bytes_processed\n---------+-------+---------+-------+-----------+------+---------+-----------------+----------------------\n 3347126 | 16384 | yr | 0 | TO | t | f |\n3529943 | 24906226\n(1 row)\n\n\"COPY test FROM '/tmp/ids';\n\nyr=# SELECT * from pg_stat_progress_copy;\n pid | datid | datname | relid | direction | file | program |\nlines_processed | file_bytes_processed\n---------+-------+---------+-------+-----------+------+---------+-----------------+----------------------\n 3347126 | 16384 | yr | 16385 | FROM | t | f |\n121591999 | 957218816\n(1 row)\n\nColumns are inspired by CREATE INDEX progress report system view.\n\npid - integer - PID of backend\ndatid - oid - OID of related database\ndatname - name - name of related database (this seems redundant, since oid\nshould be enough, but it is the same in CREATE INDEX)\nrelid - oid - oid of table related to COPY command, when not known (for\nexample when copying to file, it is 0)\ndirection - text - one of \"FROM\" or \"TO\" depends on command used\nfile - bool - is file is used?\nprogram - bool - is program used?\nlines_processed - bigint - amount of processed lines, works for both\ndirections (FROM/TO)\nfile_bytes_processed - amount of bytes processed when file is used\n(otherwise 0), works for both direction (\nFROM/TO) when file is used (file = t)\n\nPatch is attached and can be found also at\nhttps://github.com/simi/postgres/pull/5.\n\nDiff version: 
https://github.com/simi/postgres/pull/5.diff\nPatch version: https://github.com/simi/postgres/pull/5.patch\n\nI have a few initial notes and questions.\n\nI'm using ftell to get current position in file to populate\nfile_bytes_processed without error handling (ftell can return -1L and also\npopulate errno on problems).\n\n1. Is that a good way to get progress of file processing?\n2. Is it safe in given context to not care about errors? If not, what to do\non error?\n\nSome columns are not populated on certain COPY commands. For example when a\nfile is not used, file_bytes_processed is set to 0. Would it be better to\nuse NULL instead when the column is not related to the current command?\nSame problem is for relid column.\n\nI have not found any tests for progress reporting. Are there any? It would\nneed two backends running (one running COPY, one checking output of report\nview). Is there any similar test I can inspire at? In theory, it should be\npossible to use dblink_send_query to run async COPY command in the\nbackground.\n\nMy initial (attached) patch also doesn't introduce documentation for this\nsystem view. I can add that later once this patch is finalized (if that\nhappens).",
"msg_date": "Sun, 14 Jun 2020 14:32:33 +0200",
"msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "Hi Josef,\n\nOn Sun, Jun 14, 2020 at 02:32:33PM +0200, Josef Šimánek wrote:\n> Hello, as proposed by Pavel Stěhule and discussed on local czech PostgreSQL\n> maillist (\n> https://groups.google.com/d/msgid/postgresql-cz/CAFj8pRCZ42CBCa1bPHr7htffSV%2BNAcgcHHG0dVqOog4bsu2LFw%40mail.gmail.com?utm_medium=email&utm_source=footer),\n> I have prepared an initial patch for COPY command progress reporting.\n\nSounds like a good idea to me.\n\n> I have not found any tests for progress reporting. Are there any? It would\n> need two backends running (one running COPY, one checking output of report\n> view). Is there any similar test I can inspire at? In theory, it should be\n> possible to use dblink_send_query to run async COPY command in the\n> background.\n\nWe don't have any tests in core. I think that making deterministic\ntest cases is rather tricky here as long as we don't have a more\nadvanced testing framework that allows is to lock certain code paths\nand keep around an expected state until a second session comes around\nand looks at the progress catalog (even that would need adding more\ncode to core to mark the extra point looked at). So I think that it is\nfine to not focus on that for this feature. The important parts are\nthe choice of the progress points and the data sent to MyProc, and\nboth should be chosen wisely.\n\n> My initial (attached) patch also doesn't introduce documentation for this\n> system view. I can add that later once this patch is finalized (if that\n> happens).\n\nYou may want to add it to the next commit fest:\nhttps://commitfest.postgresql.org/28/\nDocumentation is necessary, and having some would ease reviews.\n--\nMichael",
"msg_date": "Mon, 15 Jun 2020 09:18:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "\n\nOn 2020/06/14 21:32, Josef Šimánek wrote:\n> Hello, as proposed by Pavel Stěhule and discussed on local czech PostgreSQL maillist (https://groups.google.com/d/msgid/postgresql-cz/CAFj8pRCZ42CBCa1bPHr7htffSV%2BNAcgcHHG0dVqOog4bsu2LFw%40mail.gmail.com?utm_medium=email&utm_source=footer), I have prepared an initial patch for COPY command progress reporting.\n\nSounds nice!\n\n\n> file - bool - is file is used?\n> program - bool - is program used?\n\nAre these fields really necessary in a progress view?\nWhat values are reported when STDOUT/STDIN is specified in COPY command?\n\n\n> file_bytes_processed - amount of bytes processed when file is used (otherwise 0), works for both direction (\n> FROM/TO) when file is used (file = t)\n\nWhat value is reported when STDOUT/STDIN is specified in COPY command?\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 15 Jun 2020 13:38:57 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "> I'm using ftell to get current position in file to populate file_bytes_processed without error handling (ftell can return -1L and also populate errno on problems).\n>\n> 1. Is that a good way to get progress of file processing?\n\nIMO, it's better to handle the error cases. One possible case where\nftell can return -1 and set errno is when the total bytes processed is\nmore than LONG_MAX.\n\nWill your patch handle file_bytes_processed reporting for COPY FROM\nSTDIN cases? For this case, ftell can't be used.\n\nInstead of using ftell and worrying about the errors, a simple\napproach could be to have a uint64 variable in CopyStateData to track\nthe number of bytes read whenever CopyGetData is called. This approach\ncan also handle the case of COPY FROM STDIN.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 15 Jun 2020 11:04:06 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "po 15. 6. 2020 v 2:18 odesílatel Michael Paquier <michael@paquier.xyz>\nnapsal:\n\n> Hi Josef,\n>\n> On Sun, Jun 14, 2020 at 02:32:33PM +0200, Josef Šimánek wrote:\n> > Hello, as proposed by Pavel Stěhule and discussed on local czech\n> PostgreSQL\n> > maillist (\n> >\n> https://groups.google.com/d/msgid/postgresql-cz/CAFj8pRCZ42CBCa1bPHr7htffSV%2BNAcgcHHG0dVqOog4bsu2LFw%40mail.gmail.com?utm_medium=email&utm_source=footer\n> ),\n> > I have prepared an initial patch for COPY command progress reporting.\n>\n> Sounds like a good idea to me.\n>\n\nGreat. I will continue working on this.\n\n\n> > I have not found any tests for progress reporting. Are there any? It\n> would\n> > need two backends running (one running COPY, one checking output of\n> report\n> > view). Is there any similar test I can inspire at? In theory, it should\n> be\n> > possible to use dblink_send_query to run async COPY command in the\n> > background.\n>\n> We don't have any tests in core. I think that making deterministic\n> test cases is rather tricky here as long as we don't have a more\n> advanced testing framework that allows is to lock certain code paths\n> and keep around an expected state until a second session comes around\n> and looks at the progress catalog (even that would need adding more\n> code to core to mark the extra point looked at). So I think that it is\n> fine to not focus on that for this feature. The important parts are\n> the choice of the progress points and the data sent to MyProc, and\n> both should be chosen wisely.\n>\n\nThanks for the info. I'm focusing exactly at looking for right spots to\nreport the progress. I'll attach new patch with better places and\nsupporting more options of reporting (including STDIN, STDOUT) soon and\nalso I'll try to add it to commitfest.\n\n\n>\n> > My initial (attached) patch also doesn't introduce documentation for this\n> > system view. 
I can add that later once this patch is finalized (if that\n> > happens).\n>\n> You may want to add it to the next commit fest:\n> https://commitfest.postgresql.org/28/\n> Documentation is necessary, and having some would ease reviews.\n> --\n> Michael\n>",
"msg_date": "Sun, 21 Jun 2020 13:31:16 +0200",
"msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "po 15. 6. 2020 v 6:39 odesílatel Fujii Masao <masao.fujii@oss.nttdata.com>\nnapsal:\n\n>\n>\n> On 2020/06/14 21:32, Josef Šimánek wrote:\n> > Hello, as proposed by Pavel Stěhule and discussed on local czech\n> PostgreSQL maillist (\n> https://groups.google.com/d/msgid/postgresql-cz/CAFj8pRCZ42CBCa1bPHr7htffSV%2BNAcgcHHG0dVqOog4bsu2LFw%40mail.gmail.com?utm_medium=email&utm_source=footer),\n> I have prepared an initial patch for COPY command progress reporting.\n>\n> Sounds nice!\n>\n>\n> > file - bool - is file is used?\n> > program - bool - is program used?\n>\n> Are these fields really necessary in a progress view?\n> What values are reported when STDOUT/STDIN is specified in COPY command?\n>\n\nFor STDOUT and STDIN file is true and program is false.\n\n\n> > file_bytes_processed - amount of bytes processed when file is used\n> (otherwise 0), works for both direction (\n> > FROM/TO) when file is used (file = t)\n>\n> What value is reported when STDOUT/STDIN is specified in COPY command?\n\n\nFor my first patch nothing was reported on STDOUT/STDIN usage. I'll attach\nnew patch soon supporting those as well.\n\n\n>\n>\n\n\n> Regards,\n>\n>\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n>",
"msg_date": "Sun, 21 Jun 2020 13:33:01 +0200",
"msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "po 15. 6. 2020 v 2:18 odesílatel Michael Paquier <michael@paquier.xyz>\nnapsal:\n\n> Hi Josef,\n>\n> On Sun, Jun 14, 2020 at 02:32:33PM +0200, Josef Šimánek wrote:\n> > Hello, as proposed by Pavel Stěhule and discussed on local czech\n> PostgreSQL\n> > maillist (\n> >\n> https://groups.google.com/d/msgid/postgresql-cz/CAFj8pRCZ42CBCa1bPHr7htffSV%2BNAcgcHHG0dVqOog4bsu2LFw%40mail.gmail.com?utm_medium=email&utm_source=footer\n> ),\n> > I have prepared an initial patch for COPY command progress reporting.\n>\n> Sounds like a good idea to me.\n>\n> > I have not found any tests for progress reporting. Are there any? It\n> would\n> > need two backends running (one running COPY, one checking output of\n> report\n> > view). Is there any similar test I can inspire at? In theory, it should\n> be\n> > possible to use dblink_send_query to run async COPY command in the\n> > background.\n>\n> We don't have any tests in core. I think that making deterministic\n> test cases is rather tricky here as long as we don't have a more\n> advanced testing framework that allows is to lock certain code paths\n> and keep around an expected state until a second session comes around\n> and looks at the progress catalog (even that would need adding more\n> code to core to mark the extra point looked at). So I think that it is\n> fine to not focus on that for this feature. The important parts are\n> the choice of the progress points and the data sent to MyProc, and\n> both should be chosen wisely.\n>\n> > My initial (attached) patch also doesn't introduce documentation for this\n> > system view. I can add that later once this patch is finalized (if that\n> > happens).\n>\n> You may want to add it to the next commit fest:\n> https://commitfest.postgresql.org/28/\n> Documentation is necessary, and having some would ease reviews.\n>\n\nI have added documentation, more code comments and I'll upload patch to\ncommit fest.\n\n\n> --\n> Michael\n>",
"msg_date": "Sun, 21 Jun 2020 13:33:46 +0200",
"msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "po 15. 6. 2020 v 7:34 odesílatel Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> napsal:\n\n> > I'm using ftell to get current position in file to populate\n> file_bytes_processed without error handling (ftell can return -1L and also\n> populate errno on problems).\n> >\n> > 1. Is that a good way to get progress of file processing?\n>\n> IMO, it's better to handle the error cases. One possible case where\n> ftell can return -1 and set errno is when the total bytes processed is\n> more than LONG_MAX.\n>\n> Will your patch handle file_bytes_processed reporting for COPY FROM\n> STDIN cases? For this case, ftell can't be used.\n>\n> Instead of using ftell and worrying about the errors, a simple\n> approach could be to have a uint64 variable in CopyStateData to track\n> the number of bytes read whenever CopyGetData is called. This approach\n> can also handle the case of COPY FROM STDIN.\n>\n\nThanks for suggestion. I used this approach and latest patch supports both\nSTDIN and STDOUT now.\n\n\n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com\n>",
"msg_date": "Sun, 21 Jun 2020 13:34:29 +0200",
"msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "Thanks for all comments. I have updated code to support more options\n(including STDIN/STDOUT) and added some documentation.\n\nPatch is attached and can be found also at\nhttps://github.com/simi/postgres/pull/5.\n\nDiff version: https://github.com/simi/postgres/pull/5.diff\nPatch version: https://github.com/simi/postgres/pull/5.patch\n\nI'm also attaching screenshot of HTML documentation and html documentation\nfile.\n\nI'll do my best to get this to commitfest now.\n\nne 14. 6. 2020 v 14:32 odesílatel Josef Šimánek <josef.simanek@gmail.com>\nnapsal:\n\n> Hello, as proposed by Pavel Stěhule and discussed on local czech\n> PostgreSQL maillist (\n> https://groups.google.com/d/msgid/postgresql-cz/CAFj8pRCZ42CBCa1bPHr7htffSV%2BNAcgcHHG0dVqOog4bsu2LFw%40mail.gmail.com?utm_medium=email&utm_source=footer),\n> I have prepared an initial patch for COPY command progress reporting.\n>\n> Few examples first:\n>\n> \"COPY (SELECT * FROM test) TO '/tmp/ids';\"\n>\n> yr=# SELECT * from pg_stat_progress_copy;\n> pid | datid | datname | relid | direction | file | program |\n> lines_processed | file_bytes_processed\n>\n> ---------+-------+---------+-------+-----------+------+---------+-----------------+----------------------\n> 3347126 | 16384 | yr | 0 | TO | t | f |\n> 3529943 | 24906226\n> (1 row)\n>\n> \"COPY test FROM '/tmp/ids';\n>\n> yr=# SELECT * from pg_stat_progress_copy;\n> pid | datid | datname | relid | direction | file | program |\n> lines_processed | file_bytes_processed\n>\n> ---------+-------+---------+-------+-----------+------+---------+-----------------+----------------------\n> 3347126 | 16384 | yr | 16385 | FROM | t | f |\n> 121591999 | 957218816\n> (1 row)\n>\n> Columns are inspired by CREATE INDEX progress report system view.\n>\n> pid - integer - PID of backend\n> datid - oid - OID of related database\n> datname - name - name of related database (this seems redundant, since oid\n> should be enough, but it is the same in CREATE INDEX)\n> 
relid - oid - oid of table related to COPY command, when not known (for\n> example when copying to file, it is 0)\n> direction - text - one of \"FROM\" or \"TO\" depending on the command used\n> file - bool - is a file used?\n> program - bool - is a program used?\n> lines_processed - bigint - amount of processed lines, works for both\n> directions (FROM/TO)\n> file_bytes_processed - amount of bytes processed when a file is used\n> (otherwise 0), works for both directions (\n> FROM/TO) when a file is used (file = t)\n>\n> Patch is attached and can be found also at\n> https://github.com/simi/postgres/pull/5.\n>\n> Diff version: https://github.com/simi/postgres/pull/5.diff\n> Patch version: https://github.com/simi/postgres/pull/5.patch\n>\n> I have a few initial notes and questions.\n>\n> I'm using ftell to get the current position in the file to populate\n> file_bytes_processed without error handling (ftell can return -1L and also\n> populate errno on problems).\n>\n> 1. Is that a good way to get progress of file processing?\n> 2. Is it safe in the given context to not care about errors? If not, what to\n> do on error?\n>\n> Some columns are not populated on certain COPY commands. For example, when\n> a file is not used, file_bytes_processed is set to 0. Would it be better to\n> use NULL instead when the column is not related to the current command?\n> The same problem applies to the relid column.\n>\n> I have not found any tests for progress reporting. Are there any? It would\n> need two backends running (one running COPY, one checking output of the report\n> view). Is there any similar test I can take inspiration from? In theory, it should be\n> possible to use dblink_send_query to run an async COPY command in the\n> background.\n>\n> My initial (attached) patch also doesn't introduce documentation for this\n> system view. I can add that later once this patch is finalized (if that\n> happens).\n>",
"msg_date": "Sun, 21 Jun 2020 13:40:34 +0200",
"msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "\n\nOn 2020/06/21 20:33, Josef Šimánek wrote:\n> \n> \n> po 15. 6. 2020 v 6:39 odesílatel Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> napsal:\n> \n> \n> \n> On 2020/06/14 21:32, Josef Šimánek wrote:\n> > Hello, as proposed by Pavel Stěhule and discussed on local czech PostgreSQL maillist (https://groups.google.com/d/msgid/postgresql-cz/CAFj8pRCZ42CBCa1bPHr7htffSV%2BNAcgcHHG0dVqOog4bsu2LFw%40mail.gmail.com?utm_medium=email&utm_source=footer), I have prepared an initial patch for COPY command progress reporting.\n> \n> Sounds nice!\n> \n> \n> > file - bool - is file is used?\n> > program - bool - is program used?\n> \n> Are these fields really necessary in a progress view?\n> What values are reported when STDOUT/STDIN is specified in COPY command?\n> \n> \n> For STDOUT and STDIN file is true and program is false.\n\nCould you tell me why these columns are necessary in *progress* view?\nIf we want to see what copy command is actually running, we can see\npg_stat_activity, instead. For example,\n\n SELECT pc.*, a.query FROM pg_stat_progress_copy pc, pg_stat_activity a WHERE pc.pid = a.pid;\n\n> \n> > file_bytes_processed - amount of bytes processed when file is used (otherwise 0), works for both direction (\n> > FROM/TO) when file is used (file = t)\n> \n> What value is reported when STDOUT/STDIN is specified in COPY command?\n> \n> \n> For my first patch nothing was reported on STDOUT/STDIN usage. I'll attach new patch soon supporting those as well.\n\nThanks for the patch!\n\nWith the patch, pg_stat_progress_copy seems to report the progress of\nthe processing on file_fdw. Is this intentional?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 22 Jun 2020 11:48:40 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "On Sun, Jun 21, 2020 at 5:11 PM Josef Šimánek <josef.simanek@gmail.com> wrote:\n>\n> Thanks for all comments. I have updated code to support more options (including STDIN/STDOUT) and added some documentation.\n>\n> Patch is attached and can be found also at https://github.com/simi/postgres/pull/5.\n>\n> Diff version: https://github.com/simi/postgres/pull/5.diff\n> Patch version: https://github.com/simi/postgres/pull/5.patch\n>\n> I'm also attaching screenshot of HTML documentation and html documentation file.\n>\n> I'll do my best to get this to commitfest now.\n>\n> ne 14. 6. 2020 v 14:32 odesílatel Josef Šimánek <josef.simanek@gmail.com> napsal:\n>>\n>> Hello, as proposed by Pavel Stěhule and discussed on local czech PostgreSQL maillist (https://groups.google.com/d/msgid/postgresql-cz/CAFj8pRCZ42CBCa1bPHr7htffSV%2BNAcgcHHG0dVqOog4bsu2LFw%40mail.gmail.com?utm_medium=email&utm_source=footer), I have prepared an initial patch for COPY command progress reporting.\n>>\n>> Few examples first:\n>>\n>> \"COPY (SELECT * FROM test) TO '/tmp/ids';\"\n>>\n>> yr=# SELECT * from pg_stat_progress_copy;\n>> pid | datid | datname | relid | direction | file | program | lines_processed | file_bytes_processed\n>> ---------+-------+---------+-------+-----------+------+---------+-----------------+----------------------\n>> 3347126 | 16384 | yr | 0 | TO | t | f | 3529943 | 24906226\n>> (1 row)\n>>\n>> \"COPY test FROM '/tmp/ids';\n>>\n>> yr=# SELECT * from pg_stat_progress_copy;\n>> pid | datid | datname | relid | direction | file | program | lines_processed | file_bytes_processed\n>> ---------+-------+---------+-------+-----------+------+---------+-----------------+----------------------\n>> 3347126 | 16384 | yr | 16385 | FROM | t | f | 121591999 | 957218816\n>> (1 row)\n>>\n>> Columns are inspired by CREATE INDEX progress report system view.\n>>\n>> pid - integer - PID of backend\n>> datid - oid - OID of related database\n>> datname - name - name of related database 
(this seems redundant, since oid should be enough, but it is the same in CREATE INDEX)\n>> relid - oid - oid of table related to COPY command, when not known (for example when copying to file, it is 0)\n>> direction - text - one of \"FROM\" or \"TO\" depends on command used\n>> file - bool - is file is used?\n>> program - bool - is program used?\n>> lines_processed - bigint - amount of processed lines, works for both directions (FROM/TO)\n>> file_bytes_processed - amount of bytes processed when file is used (otherwise 0), works for both direction (\n>> FROM/TO) when file is used (file = t)\n>>\n>> Patch is attached and can be found also at https://github.com/simi/postgres/pull/5.\n>>\n\nFew comments:\n@@ -713,6 +714,8 @@ CopyGetData(CopyState cstate, void *databuf, int\nminread, int maxread)\n break;\n }\n\n+ CopyUpdateBytesProgress(cstate, bytesread);\n+\n return bytesread;\n }\n\nThis is only the data that has been read; the actual processing happens later,\nfor example in CopyReadLineText. It would be better if\nCopyUpdateBytesProgress were called later; otherwise it will report the same\nvalue even while multiple inserts are done on the table:\nlines_processed will keep getting updated, but file_bytes_processed\nwill not be updated.\n\n +pg_stat_progress_copy| SELECT s.pid,\n+ s.datid,\n+ d.datname,\n+ s.relid,\n+ CASE s.param1\n+ WHEN 0 THEN 'TO'::text\n+ WHEN 1 THEN 'FROM'::text\n+ ELSE NULL::text\n+ END AS direction,\n+ ((s.param2)::integer)::boolean AS file,\n+ ((s.param3)::integer)::boolean AS program,\n+ s.param4 AS lines_processed,\n+ s.param5 AS file_bytes_processed\n\nYou could include pg_size_pretty for s.param5, like\npg_size_pretty(S.param5) AS bytes_processed; it will be easier for\nusers to understand bytes_processed when the data size increases.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 22 Jun 2020 12:45:19 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "po 22. 6. 2020 v 4:48 odesílatel Fujii Masao <masao.fujii@oss.nttdata.com>\nnapsal:\n\n>\n>\n> On 2020/06/21 20:33, Josef Šimánek wrote:\n> >\n> >\n> > po 15. 6. 2020 v 6:39 odesílatel Fujii Masao <\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> napsal:\n> >\n> >\n> >\n> > On 2020/06/14 21:32, Josef Šimánek wrote:\n> > > Hello, as proposed by Pavel Stěhule and discussed on local czech\n> PostgreSQL maillist (\n> https://groups.google.com/d/msgid/postgresql-cz/CAFj8pRCZ42CBCa1bPHr7htffSV%2BNAcgcHHG0dVqOog4bsu2LFw%40mail.gmail.com?utm_medium=email&utm_source=footer),\n> I have prepared an initial patch for COPY command progress reporting.\n> >\n> > Sounds nice!\n> >\n> >\n> > > file - bool - is file is used?\n> > > program - bool - is program used?\n> >\n> > Are these fields really necessary in a progress view?\n> > What values are reported when STDOUT/STDIN is specified in COPY\n> command?\n> >\n> >\n> > For STDOUT and STDIN file is true and program is false.\n>\n> Could you tell me why these columns are necessary in *progress* view?\n> If we want to see what copy command is actually running, we can see\n> pg_stat_activity, instead. For example,\n>\n> SELECT pc.*, a.query FROM pg_stat_progress_copy pc, pg_stat_activity\n> a WHERE pc.pid = a.pid;\n>\n\nIf that doesn't make any sense, I can remove those. I have not strong\nopinion about those values. Those were just around when I was looking for\npossible values to include in the progress report.\n\n>\n> > > file_bytes_processed - amount of bytes processed when file is\n> used (otherwise 0), works for both direction (\n> > > FROM/TO) when file is used (file = t)\n> >\n> > What value is reported when STDOUT/STDIN is specified in COPY\n> command?\n> >\n> >\n> > For my first patch nothing was reported on STDOUT/STDIN usage. 
I'll\n> attach new patch soon supporting those as well.\n>\n> Thanks for the patch!\n>\n> With the patch, pg_stat_progress_copy seems to report the progress of\n> the processing on file_fdw. Is this intentional?\n>\n\nEvery action using internally COPY will be included in the progress report\nview.\nI have spotted for example pg_dump does that and is reported there as well.\nI do not see any problem regarding this. For pg_dump it is consistent with\n\"pg_stat_activity\" reporting COPY command in the query field.\n\n\n> Regards,\n>\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n>",
"msg_date": "Mon, 22 Jun 2020 10:21:13 +0200",
"msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "po 22. 6. 2020 v 9:15 odesílatel vignesh C <vignesh21@gmail.com> napsal:\n\n> On Sun, Jun 21, 2020 at 5:11 PM Josef Šimánek <josef.simanek@gmail.com>\n> wrote:\n> >\n> > Thanks for all comments. I have updated code to support more options\n> (including STDIN/STDOUT) and added some documentation.\n> >\n> > Patch is attached and can be found also at\n> https://github.com/simi/postgres/pull/5.\n> >\n> > Diff version: https://github.com/simi/postgres/pull/5.diff\n> > Patch version: https://github.com/simi/postgres/pull/5.patch\n> >\n> > I'm also attaching screenshot of HTML documentation and html\n> documentation file.\n> >\n> > I'll do my best to get this to commitfest now.\n> >\n> > ne 14. 6. 2020 v 14:32 odesílatel Josef Šimánek <josef.simanek@gmail.com>\n> napsal:\n> >>\n> >> Hello, as proposed by Pavel Stěhule and discussed on local czech\n> PostgreSQL maillist (\n> https://groups.google.com/d/msgid/postgresql-cz/CAFj8pRCZ42CBCa1bPHr7htffSV%2BNAcgcHHG0dVqOog4bsu2LFw%40mail.gmail.com?utm_medium=email&utm_source=footer),\n> I have prepared an initial patch for COPY command progress reporting.\n> >>\n> >> Few examples first:\n> >>\n> >> \"COPY (SELECT * FROM test) TO '/tmp/ids';\"\n> >>\n> >> yr=# SELECT * from pg_stat_progress_copy;\n> >> pid | datid | datname | relid | direction | file | program |\n> lines_processed | file_bytes_processed\n> >>\n> ---------+-------+---------+-------+-----------+------+---------+-----------------+----------------------\n> >> 3347126 | 16384 | yr | 0 | TO | t | f |\n> 3529943 | 24906226\n> >> (1 row)\n> >>\n> >> \"COPY test FROM '/tmp/ids';\n> >>\n> >> yr=# SELECT * from pg_stat_progress_copy;\n> >> pid | datid | datname | relid | direction | file | program |\n> lines_processed | file_bytes_processed\n> >>\n> ---------+-------+---------+-------+-----------+------+---------+-----------------+----------------------\n> >> 3347126 | 16384 | yr | 16385 | FROM | t | f |\n> 121591999 | 957218816\n> >> (1 row)\n> >>\n> >> 
Columns are inspired by CREATE INDEX progress report system view.\n> >>\n> >> pid - integer - PID of backend\n> >> datid - oid - OID of related database\n> >> datname - name - name of related database (this seems redundant, since\n> oid should be enough, but it is the same in CREATE INDEX)\n> >> relid - oid - oid of table related to COPY command, when not known (for\n> example when copying to file, it is 0)\n> >> direction - text - one of \"FROM\" or \"TO\" depends on command used\n> >> file - bool - is file is used?\n> >> program - bool - is program used?\n> >> lines_processed - bigint - amount of processed lines, works for both\n> directions (FROM/TO)\n> >> file_bytes_processed - amount of bytes processed when file is used\n> (otherwise 0), works for both direction (\n> >> FROM/TO) when file is used (file = t)\n> >>\n> >> Patch is attached and can be found also at\n> https://github.com/simi/postgres/pull/5.\n> >>\n>\n> Few comments:\n> @@ -713,6 +714,8 @@ CopyGetData(CopyState cstate, void *databuf, int\n> minread, int maxread)\n> break;\n> }\n>\n> + CopyUpdateBytesProgress(cstate, bytesread);\n> +\n> return bytesread;\n> }\n>\n> This is actually the read data, actual processing will happen later\n> like in CopyReadLineText, it would be better if\n> CopyUpdateBytesProgress is done later, if not it will give the same\n> value even though it does multiple inserts on the table.\n> lines_processed will keep getting updated but file_bytes_processed\n> will not be updated.\n>\n\nFirst I would like to explain what's reported (or at least I'm trying to\nget reported) at bytes_processed column.\n\nWhen exporting to file it should start at 0 and end up at the actual final\nfile size.\nWhen importing from file, it should do the same. 
You can check file size\nbefore you start COPY FROM and get actual progress looking at\nbytes_processed.\n\nThis column is just a counter of bytes read from input on COPY FROM or\namount of bytes going through COPY TO.\n\nThanks for the hint regarding \"CopyReadLineText\". I'll take a look.\n\nFor now I have tested those cases:\n\nCREATE TABLE test(id int);\nINSERT INTO test SELECT 1 FROM generate_series(1, 1000000);\nCOPY (SELECT * FROM test) TO '/tmp/ids';\nCOPY test FROM '/tmp/ids';\n\npsql -h /tmp yr -c 'COPY (SELECT 1 from generate_series(1,100000000)) TO\nSTDOUT;' > /tmp/ryba.txt\necho /tmp/ryba.txt | psql -h /tmp yr -c 'COPY test FROM STDIN'\n\nIt is easy to check lines count and bytes count are in sync (since 1 line\nis 2 bytes here - \"1\" and newline character).\nI'll try to check more complex COPY commands to ensure everything is in\nsync.\n\nIf you have any ideas for testing queries, feel free to suggest.\n\n +pg_stat_progress_copy| SELECT s.pid,\n> + s.datid,\n> + d.datname,\n> + s.relid,\n> + CASE s.param1\n> + WHEN 0 THEN 'TO'::text\n> + WHEN 1 THEN 'FROM'::text\n> + ELSE NULL::text\n> + END AS direction,\n> + ((s.param2)::integer)::boolean AS file,\n> + ((s.param3)::integer)::boolean AS program,\n> + s.param4 AS lines_processed,\n> + s.param5 AS file_bytes_processed\n>\n> You could include pg_size_pretty for s.param5 like\n> pg_size_pretty(S.param5) AS bytes_processed, it will be easier for\n> users to understand bytes_processed when the data size increases.\n\n\nI was looking at the rest of reporting views and for me those seem to be\njust basic ones providing just raw data to be used later in custom nice\nfriendly human-readable views built on the client side.\nFor example \"pg_stat_progress_basebackup\" also reports \"backup_streamed\" in\nraw form.\n\nAnyway if you would like to make this view more user-friendly, I can add\nthat. Just ping me.\n\n>\n>\nRegards,\n> Vignesh\n> EnterpriseDB: http://www.enterprisedb.com\n>",
"msg_date": "Mon, 22 Jun 2020 12:57:53 +0200",
"msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "On Sun, Jun 21, 2020 at 01:40:34PM +0200, Josef Šimánek wrote:\n>Thanks for all comments. I have updated code to support more options\n>(including STDIN/STDOUT) and added some documentation.\n>\n>Patch is attached and can be found also at\n>https://github.com/simi/postgres/pull/5.\n>\n>Diff version: https://github.com/simi/postgres/pull/5.diff\n>Patch version: https://github.com/simi/postgres/pull/5.patch\n>\n>I'm also attaching screenshot of HTML documentation and html documentation\n>file.\n>\n>I'll do my best to get this to commitfest now.\n>\n\nI see we're not showing the total number of bytes the COPY is expected\nto process, which makes it hard to estimate how far we actually are.\nClearly there are cases when we really don't know that (exports, import\nfrom stdin/program), but why not to show file size for imports from a\nfile? I'd expect that to be the most common case.\n\nI wonder if it made sense to show some estimates in the other cases. For\nexample when exporting query result, maybe we could show the estimated\nnumber of rows and size? Of course, that's prone to estimation errors\nand it's more a wild idea for the future, I don't expect this patch to\nimplement that.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 22 Jun 2020 14:14:45 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "po 22. 6. 2020 v 14:14 odesílatel Tomas Vondra <tomas.vondra@2ndquadrant.com>\nnapsal:\n\n> On Sun, Jun 21, 2020 at 01:40:34PM +0200, Josef Šimánek wrote:\n> >Thanks for all comments. I have updated code to support more options\n> >(including STDIN/STDOUT) and added some documentation.\n> >\n> >Patch is attached and can be found also at\n> >https://github.com/simi/postgres/pull/5.\n> >\n> >Diff version: https://github.com/simi/postgres/pull/5.diff\n> >Patch version: https://github.com/simi/postgres/pull/5.patch\n> >\n> >I'm also attaching screenshot of HTML documentation and html documentation\n> >file.\n> >\n> >I'll do my best to get this to commitfest now.\n> >\n>\n> I see we're not showing the total number of bytes the COPY is expected\n> to process, which makes it hard to estimate how far we actually are.\n> Clearly there are cases when we really don't know that (exports, import\n> from stdin/program), but why not to show file size for imports from a\n> file? I'd expect that to be the most common case.\n>\n\nFor COPY FROM file fstat is done and info is available already at\nhttps://github.com/postgres/postgres/blob/fe186b4c200b76a5c0f03379fe8645ed1c70a844/src/backend/commands/copy.c#L1934.\nIt should be easy to update some param (param6 for example) with file size\nand expose it in report view. When not available, this column can be NULL.\n\nWould that be enough?\n\nOn the other side everyone can check file size manually to get total value\nexpected and just compare to reported bytes_processed. Alt. \"wc -l\" can be\nchecked to get amount of lines and check lines_processed column to get\nprogress. Should it check amount of lines and populate another column with\nlines total (using a configured separator) as well? AFAIK that would need\nfull file scan which can be slow for huge files.\n\n\n> I wonder if it made sense to show some estimates in the other cases. 
For\n> example when exporting query result, maybe we could show the estimated\n> number of rows and size? Of course, that's prone to estimation errors\n> and it's more a wild idea for the future, I don't expect this patch to\n> implement that.\n>\n\nMy plan here was to expose numbers not being currently available and let\nclients get the rest of info on their own.\n\nFor example:\n- for \"COPY (query) TO file\" - EXPLAIN or COUNT variant of query could be\nexecuted before to get the amount of expected rows\n- for \"COPY table FROM file\" - file size or amount of lines in file can be\ninspected first to get amount of expected rows or bytes to be processed\n\nI see the current system view in my patch (and also all other report views\ncurrently available) more as a scaffold to build own tools.\n\nFor example CLI tools can use this to provide some kind of progress.\n\n\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>",
"msg_date": "Mon, 22 Jun 2020 15:33:00 +0200",
"msg_from": "Josef Šimánek <josef.simanek@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "On Mon, Jun 22, 2020 at 03:33:00PM +0200, Josef Šimánek wrote:\n>po 22. 6. 2020 v 14:14 odesílatel Tomas Vondra <tomas.vondra@2ndquadrant.com>\n>napsal:\n>\n>> On Sun, Jun 21, 2020 at 01:40:34PM +0200, Josef Šimánek wrote:\n>> >Thanks for all comments. I have updated code to support more options\n>> >(including STDIN/STDOUT) and added some documentation.\n>> >\n>> >Patch is attached and can be found also at\n>> >https://github.com/simi/postgres/pull/5.\n>> >\n>> >Diff version: https://github.com/simi/postgres/pull/5.diff\n>> >Patch version: https://github.com/simi/postgres/pull/5.patch\n>> >\n>> >I'm also attaching screenshot of HTML documentation and html documentation\n>> >file.\n>> >\n>> >I'll do my best to get this to commitfest now.\n>> >\n>>\n>> I see we're not showing the total number of bytes the COPY is expected\n>> to process, which makes it hard to estimate how far we actually are.\n>> Clearly there are cases when we really don't know that (exports, import\n>> from stdin/program), but why not to show file size for imports from a\n>> file? I'd expect that to be the most common case.\n>>\n>\n>For COPY FROM file fstat is done and info is available already at\n>https://github.com/postgres/postgres/blob/fe186b4c200b76a5c0f03379fe8645ed1c70a844/src/backend/commands/copy.c#L1934.\n>It should be easy to update some param (param6 for example) with file size\n>and expose it in report view. When not available, this column can be NULL.\n>\n>Would that be enough?\n>\n\nYes, I think that'd be fine. The rows without a file should have NULL,\nbecause we literally don't know what the value is. And 0 is a valid file\nsize, so we can't use it anyway.\n\n>On the other side everyone can check file size manually to get total value\n>expected and just compare to reported bytes_processed. Alt. \"wc -l\" can be\n>checked to get amount of lines and check lines_processed column to get\n>progress. 
Should it check amount of lines and populate another column with\n>lines total (using a configured separator) as well? AFAIK that would need\n>full file scan which can be slow for huge files.\n>\n\nSure, but the extra `wc -l` is less convenient and you then need to\ncombine that with pg_stat_progress_copy. With the information right in\nthe view, you can do (100.0 * bytes_processed / bytes_total) and you get\nthe progress as a percentage. (I've omitted the NULL handling.)\n\nAs for the number of lines, I certainly don't think we need to scan the\nfile - that'd be far too expensive. What we might do is estimate it as\n\n total_bytes / (processed_bytes / processed_rows)\n\nbut that's something people can easily do on their own. So I don't think\nit needs to be part of the patch, and IMHO bytes_processed / bytes_total\nis a sufficient measure of progress.\n\n>\n>> I wonder if it made sense to show some estimates in the other cases. For\n>> example when exporting query result, maybe we could show the estimated\n>> number of rows and size? Of course, that's prone to estimation errors\n>> and it's more a wild idea for the future, I don't expect this patch to\n>> implement that.\n>>\n>\n>My plan here was to expose numbers not being currently available and let\n>clients get the rest of info on their own.\n>\n>For example:\n>- for \"COPY (query) TO file\" - EXPLAIN or COUNT variant of query could be\n>executed before to get the amount of expected rows\n>- for \"COPY table FROM file\" - file size or amount of lines in file can be\n>inspected first to get amount of expected rows or bytes to be processed\n>\n>I see the current system view in my patch (and also all other report views\n>currently available) more as a scaffold to build own tools.\n>\n>For example CLI tools can use this to provide some kind of progress.\n>\n\nTrue, but I'd advise against putting this into v1 of the patch. 
Let's\nkeep it simple, get it committed and then maybe improve it later.\n\nSome of these stats (like the estimates from a query) may be quite\nunreliable, so I think it needs more discussion. We might invent\nlines_estimated or something like that, for example.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 22 Jun 2020 16:19:00 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "On Mon, Jun 22, 2020 at 4:28 PM Josef Šimánek <josef.simanek@gmail.com> wrote:\n>\n> Thanks for the hint regarding \"CopyReadLineText\". I'll take a look.\n>\n> For now I have tested those cases:\n>\n> CREATE TABLE test(id int);\n> INSERT INTO test SELECT 1 FROM generate_series(1, 1000000);\n> COPY (SELECT * FROM test) TO '/tmp/ids';\n> COPY test FROM '/tmp/ids';\n>\n> psql -h /tmp yr -c 'COPY (SELECT 1 from generate_series(1,100000000)) TO STDOUT;' > /tmp/ryba.txt\n> echo /tmp/ryba.txt | psql -h /tmp yr -c 'COPY test FROM STDIN'\n>\n> It is easy to check lines count and bytes count are in sync (since 1 line is 2 bytes here - \"1\" and newline character).\n> I'll try to check more complex COPY commands to ensure everything is in sync.\n>\n> If you have any ideas for testing queries, feel free to suggest.\n\nFor copy from statement you could attach the session, put a breakpoint\nat CopyReadLineText, execution will hit this breakpoint for every\nrecord it is doing COPY FROM and parallely check if\npg_stat_progress_copy is getting updated correctly. 
I noticed it was\nshowing the file read size instead of the actual processed bytes.\n\n>> +pg_stat_progress_copy| SELECT s.pid,\n>> + s.datid,\n>> + d.datname,\n>> + s.relid,\n>> + CASE s.param1\n>> + WHEN 0 THEN 'TO'::text\n>> + WHEN 1 THEN 'FROM'::text\n>> + ELSE NULL::text\n>> + END AS direction,\n>> + ((s.param2)::integer)::boolean AS file,\n>> + ((s.param3)::integer)::boolean AS program,\n>> + s.param4 AS lines_processed,\n>> + s.param5 AS file_bytes_processed\n>>\n>> You could include pg_size_pretty for s.param5 like\n>> pg_size_pretty(S.param5) AS bytes_processed, it will be easier for\n>> users to understand bytes_processed when the data size increases.\n>\n>\n> I was looking at the rest of reporting views and for me those seem to be just basic ones providing just raw data to be used later in custom nice friendly human-readable views built on the client side.\n> For example \"pg_stat_progress_basebackup\" also reports \"backup_streamed\" in raw form.\n>\n> Anyway if you would like to make this view more user-friendly, I can add that. Just ping me.\n\nI felt we could add pg_size_pretty to make the view more user friendly.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Jun 2020 15:40:08 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "On Tue, Jun 23, 2020 at 03:40:08PM +0530, vignesh C wrote:\n>On Mon, Jun 22, 2020 at 4:28 PM Josef Šimánek <josef.simanek@gmail.com> wrote:\n>>\n>> Thanks for the hint regarding \"CopyReadLineText\". I'll take a look.\n>>\n>> For now I have tested those cases:\n>>\n>> CREATE TABLE test(id int);\n>> INSERT INTO test SELECT 1 FROM generate_series(1, 1000000);\n>> COPY (SELECT * FROM test) TO '/tmp/ids';\n>> COPY test FROM '/tmp/ids';\n>>\n>> psql -h /tmp yr -c 'COPY (SELECT 1 from generate_series(1,100000000)) TO STDOUT;' > /tmp/ryba.txt\n>> echo /tmp/ryba.txt | psql -h /tmp yr -c 'COPY test FROM STDIN'\n>>\n>> It is easy to check lines count and bytes count are in sync (since 1 line is 2 bytes here - \"1\" and newline character).\n>> I'll try to check more complex COPY commands to ensure everything is in sync.\n>>\n>> If you have any ideas for testing queries, feel free to suggest.\n>\n>For copy from statement you could attach the session, put a breakpoint\n>at CopyReadLineText, execution will hit this breakpoint for every\n>record it is doing COPY FROM and parallely check if\n>pg_stat_progress_copy is getting updated correctly. 
I noticed it was\n>showing the file read size instead of the actual processed bytes.\n>\n>>> +pg_stat_progress_copy| SELECT s.pid,\n>>> + s.datid,\n>>> + d.datname,\n>>> + s.relid,\n>>> + CASE s.param1\n>>> + WHEN 0 THEN 'TO'::text\n>>> + WHEN 1 THEN 'FROM'::text\n>>> + ELSE NULL::text\n>>> + END AS direction,\n>>> + ((s.param2)::integer)::boolean AS file,\n>>> + ((s.param3)::integer)::boolean AS program,\n>>> + s.param4 AS lines_processed,\n>>> + s.param5 AS file_bytes_processed\n>>>\n>>> You could include pg_size_pretty for s.param5 like\n>>> pg_size_pretty(S.param5) AS bytes_processed, it will be easier for\n>>> users to understand bytes_processed when the data size increases.\n>>\n>>\n>> I was looking at the rest of reporting views and for me those seem to be just basic ones providing just raw data to be used later in custom nice friendly human-readable views built on the client side.\n>> For example \"pg_stat_progress_basebackup\" also reports \"backup_streamed\" in raw form.\n>>\n>> Anyway if you would like to make this view more user-friendly, I can add that. Just ping me.\n>\n>I felt we could add pg_size_pretty to make the view more user friendly.\n>\n\nPlease no. That'd make processing of the data (say, computing progress\nas processed/total) impossible. It's easy to add pg_size_pretty if you\nwant it, it's impossible to undo it. I don't see a single pg_size_pretty\ncall in system_views.sql.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 23 Jun 2020 13:15:19 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "út 23. 6. 2020 v 13:15 odesílatel Tomas Vondra <tomas.vondra@2ndquadrant.com>\nnapsal:\n\n> On Tue, Jun 23, 2020 at 03:40:08PM +0530, vignesh C wrote:\n> >On Mon, Jun 22, 2020 at 4:28 PM Josef Šimánek <josef.simanek@gmail.com>\n> wrote:\n> >>\n> >> Thanks for the hint regarding \"CopyReadLineText\". I'll take a look.\n> >>\n> >> For now I have tested those cases:\n> >>\n> >> CREATE TABLE test(id int);\n> >> INSERT INTO test SELECT 1 FROM generate_series(1, 1000000);\n> >> COPY (SELECT * FROM test) TO '/tmp/ids';\n> >> COPY test FROM '/tmp/ids';\n> >>\n> >> psql -h /tmp yr -c 'COPY (SELECT 1 from generate_series(1,100000000))\n> TO STDOUT;' > /tmp/ryba.txt\n> >> echo /tmp/ryba.txt | psql -h /tmp yr -c 'COPY test FROM STDIN'\n> >>\n> >> It is easy to check lines count and bytes count are in sync (since 1\n> line is 2 bytes here - \"1\" and newline character).\n> >> I'll try to check more complex COPY commands to ensure everything is in\n> sync.\n> >>\n> >> If you have any ideas for testing queries, feel free to suggest.\n> >\n> >For copy from statement you could attach the session, put a breakpoint\n> >at CopyReadLineText, execution will hit this breakpoint for every\n> >record it is doing COPY FROM and parallely check if\n> >pg_stat_progress_copy is getting updated correctly. 
I noticed it was\n> >showing the file read size instead of the actual processed bytes.\n> >\n> >>> +pg_stat_progress_copy| SELECT s.pid,\n> >>> + s.datid,\n> >>> + d.datname,\n> >>> + s.relid,\n> >>> + CASE s.param1\n> >>> + WHEN 0 THEN 'TO'::text\n> >>> + WHEN 1 THEN 'FROM'::text\n> >>> + ELSE NULL::text\n> >>> + END AS direction,\n> >>> + ((s.param2)::integer)::boolean AS file,\n> >>> + ((s.param3)::integer)::boolean AS program,\n> >>> + s.param4 AS lines_processed,\n> >>> + s.param5 AS file_bytes_processed\n> >>>\n> >>> You could include pg_size_pretty for s.param5 like\n> >>> pg_size_pretty(S.param5) AS bytes_processed, it will be easier for\n> >>> users to understand bytes_processed when the data size increases.\n> >>\n> >>\n> >> I was looking at the rest of reporting views and for me those seem to\n> be just basic ones providing just raw data to be used later in custom nice\n> friendly human-readable views built on the client side.\n> >> For example \"pg_stat_progress_basebackup\" also reports\n> \"backup_streamed\" in raw form.\n> >>\n> >> Anyway if you would like to make this view more user-friendly, I can\n> add that. Just ping me.\n> >\n> >I felt we could add pg_size_pretty to make the view more user friendly.\n> >\n>\n> Please no. That'd make processing of the data (say, computing progress\n> as processed/total) impossible. It's easy to add pg_size_pretty if you\n> want it, it's impossible to undo it. I don't see a single pg_size_pretty\n> call in system_views.sql.\n>\n\n+1, *_pretty functions should be used on the client side only. Server side\n(source) should be in raw format.\n\nRegards\n\nPavel\n\n\n\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n",
"msg_date": "Tue, 23 Jun 2020 14:02:45 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "út 23. 6. 2020 v 13:15 odesílatel Tomas Vondra <tomas.vondra@2ndquadrant.com>\nnapsal:\n\n> On Tue, Jun 23, 2020 at 03:40:08PM +0530, vignesh C wrote:\n> >On Mon, Jun 22, 2020 at 4:28 PM Josef Šimánek <josef.simanek@gmail.com>\n> wrote:\n> >>\n> >> Thanks for the hint regarding \"CopyReadLineText\". I'll take a look.\n> >>\n> >> For now I have tested those cases:\n> >>\n> >> CREATE TABLE test(id int);\n> >> INSERT INTO test SELECT 1 FROM generate_series(1, 1000000);\n> >> COPY (SELECT * FROM test) TO '/tmp/ids';\n> >> COPY test FROM '/tmp/ids';\n> >>\n> >> psql -h /tmp yr -c 'COPY (SELECT 1 from generate_series(1,100000000))\n> TO STDOUT;' > /tmp/ryba.txt\n> >> echo /tmp/ryba.txt | psql -h /tmp yr -c 'COPY test FROM STDIN'\n> >>\n> >> It is easy to check lines count and bytes count are in sync (since 1\n> line is 2 bytes here - \"1\" and newline character).\n> >> I'll try to check more complex COPY commands to ensure everything is in\n> sync.\n> >>\n> >> If you have any ideas for testing queries, feel free to suggest.\n> >\n> >For copy from statement you could attach the session, put a breakpoint\n> >at CopyReadLineText, execution will hit this breakpoint for every\n> >record it is doing COPY FROM and parallely check if\n> >pg_stat_progress_copy is getting updated correctly. 
I noticed it was\n> >showing the file read size instead of the actual processed bytes.\n> >\n> >>> +pg_stat_progress_copy| SELECT s.pid,\n> >>> + s.datid,\n> >>> + d.datname,\n> >>> + s.relid,\n> >>> + CASE s.param1\n> >>> + WHEN 0 THEN 'TO'::text\n> >>> + WHEN 1 THEN 'FROM'::text\n> >>> + ELSE NULL::text\n> >>> + END AS direction,\n> >>> + ((s.param2)::integer)::boolean AS file,\n> >>> + ((s.param3)::integer)::boolean AS program,\n> >>> + s.param4 AS lines_processed,\n> >>> + s.param5 AS file_bytes_processed\n> >>>\n> >>> You could include pg_size_pretty for s.param5 like\n> >>> pg_size_pretty(S.param5) AS bytes_processed, it will be easier for\n> >>> users to understand bytes_processed when the data size increases.\n> >>\n> >>\n> >> I was looking at the rest of reporting views and for me those seem to\n> be just basic ones providing just raw data to be used later in custom nice\n> friendly human-readable views built on the client side.\n> >> For example \"pg_stat_progress_basebackup\" also reports\n> \"backup_streamed\" in raw form.\n> >>\n> >> Anyway if you would like to make this view more user-friendly, I can\n> add that. Just ping me.\n> >\n> >I felt we could add pg_size_pretty to make the view more user friendly.\n> >\n>\n> Please no. That'd make processing of the data (say, computing progress\n> as processed/total) impossible. It's easy to add pg_size_pretty if you\n> want it, it's impossible to undo it. I don't see a single pg_size_pretty\n> call in system_views.sql.\n>\n>\nI think the same.\n\n\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n",
"msg_date": "Tue, 23 Jun 2020 14:22:07 +0200",
"msg_from": "Josef Šimánek <josef.simanek@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "> po 15. 6. 2020 v 7:34 odesílatel Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> napsal:\n>>\n>> > I'm using ftell to get current position in file to populate file_bytes_processed without error handling (ftell can return -1L and also populate errno on problems).\n>> >\n>> > 1. Is that a good way to get progress of file processing?\n>>\n>> IMO, it's better to handle the error cases. One possible case where\n>> ftell can return -1 and set errno is when the total bytes processed is\n>> more than LONG_MAX.\n>>\n>> Will your patch handle file_bytes_processed reporting for COPY FROM\n>> STDIN cases? For this case, ftell can't be used.\n>>\n>> Instead of using ftell and worrying about the errors, a simple\n>> approach could be to have a uint64 variable in CopyStateData to track\n>> the number of bytes read whenever CopyGetData is called. This approach\n>> can also handle the case of COPY FROM STDIN.\n>\n>\n> Thanks for suggestion. I used this approach and latest patch supports both STDIN and STDOUT now.\n>\n\nThanks.\n\nIt would be good to see the performance of the copy command (probably\nwith a few GBs of data) with patch and without patch for both csv/text\nand binary files.\n\nFor copy from command CopyGetData gets called for every\nRAW_BUF_SIZE (64KB) and so is the CopyUpdateBytesProgress function, but for\nbinary format files, CopyGetData gets called for each field/column for\nall rows/lines/tuples.\n\nCan we make CopyUpdateBytesProgress() a macro or an inline\nfunction (probably by using pg_attribute_always_inline) to reduce\nfunction call overhead as it just handles two statements?\n\nI tried to apply the patch on commit #\n7ce461560159948ba0c802c767e42c5f5ae08b4a, seems like a warning.\n\nbharath:postgres$ git apply /mnt/hgfs/Downloads/copy-progress-v2.diff\n/mnt/hgfs/Downloads/copy-progress-v2.diff:277: trailing whitespace.\n * for counting tuples inserted by an INSERT\ncommand. 
Update\nwarning: 1 line adds whitespace errors.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Jun 2020 18:22:01 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "\n\nOn 2020/06/22 17:21, Josef Šimánek wrote:\n> \n> \n> po 22. 6. 2020 v 4:48 odesílatel Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> napsal:\n> \n> \n> \n> On 2020/06/21 20:33, Josef Šimánek wrote:\n> >\n> >\n> > po 15. 6. 2020 v 6:39 odesílatel Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>> napsal:\n> >\n> >\n> >\n> > On 2020/06/14 21:32, Josef Šimánek wrote:\n> > > Hello, as proposed by Pavel Stěhule and discussed on local czech PostgreSQL maillist (https://groups.google.com/d/msgid/postgresql-cz/CAFj8pRCZ42CBCa1bPHr7htffSV%2BNAcgcHHG0dVqOog4bsu2LFw%40mail.gmail.com?utm_medium=email&utm_source=footer), I have prepared an initial patch for COPY command progress reporting.\n> >\n> > Sounds nice!\n> >\n> >\n> > > file - bool - is file is used?\n> > > program - bool - is program used?\n> >\n> > Are these fields really necessary in a progress view?\n> > What values are reported when STDOUT/STDIN is specified in COPY command?\n> >\n> >\n> > For STDOUT and STDIN file is true and program is false.\n> \n> Could you tell me why these columns are necessary in *progress* view?\n> If we want to see what copy command is actually running, we can see\n> pg_stat_activity, instead. For example,\n> \n> SELECT pc.*, a.query FROM pg_stat_progress_copy pc, pg_stat_activity a WHERE pc.pid = a.pid;\n> \n> If that doesn't make any sense, I can remove those. I have not strong opinion about those values. Those were just around when I was looking for possible values to include in the progress report.\n\nI vote not to expose them. *If* we expose them, we should also\nexpose the options in pg_stat_progress_xxx views, for example,\nthe options for BASE_BACKUP command in pg_stat_progress_basebackup,\nfor the consistency. 
But I don't think that makes sense.\n\n> \n> >\n> > > file_bytes_processed - amount of bytes processed when file is used (otherwise 0), works for both direction (\n> > > FROM/TO) when file is used (file = t)\n> >\n> > What value is reported when STDOUT/STDIN is specified in COPY command?\n> >\n> >\n> > For my first patch nothing was reported on STDOUT/STDIN usage. I'll attach new patch soon supporting those as well.\n> \n> Thanks for the patch!\n> \n> With the patch, pg_stat_progress_copy seems to report the progress of\n> the processing on file_fdw. Is this intentional?\n> \n> \n> Every action using internally COPY will be included in the progress report view.\n> I have spotted for example pg_dump does that and is reported there as well.\n> I do not see any problem regarding this. For pg_dump it is consistent with \"pg_stat_activity\" reporting COPY command in the query field.\n\nSo it's better to add this kind of information into the docs?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 24 Jun 2020 02:57:31 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "\n\nOn 2020/06/22 21:14, Tomas Vondra wrote:\n> On Sun, Jun 21, 2020 at 01:40:34PM +0200, Josef Šimánek wrote:\n>> Thanks for all comments. I have updated code to support more options\n>> (including STDIN/STDOUT) and added some documentation.\n>>\n>> Patch is attached and can be found also at\n>> https://github.com/simi/postgres/pull/5.\n>>\n>> Diff version: https://github.com/simi/postgres/pull/5.diff\n>> Patch version: https://github.com/simi/postgres/pull/5.patch\n>>\n>> I'm also attaching screenshot of HTML documentation and html documentation\n>> file.\n>>\n>> I'll do my best to get this to commitfest now.\n>>\n> \n> I see we're not showing the total number of bytes the COPY is expected\n> to process, which makes it hard to estimate how far we actually are.\n> Clearly there are cases when we really don't know that (exports, import\n> from stdin/program), but why not to show file size for imports from a\n> file? I'd expect that to be the most common case.\n\n+1\n\nI like using \\copy psql meta command. So I feel better if the total size\nis reported even when using \\copy (i.e., COPY STDIN).\n\nAs just idea, what about adding new option into COPY command,\nallowing users (including \\copy command) to specify the estimated size\nof input file in that option, and making pg_stat_progress_copy view\ndisplay it as the total size? If we implement this mechanism, we can\nchange \\copy command so that it calculate the actual size of input file\nand specify it in that option.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 24 Jun 2020 03:17:13 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "On Tue, Jun 23, 2020 at 4:45 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> >>\n> >> Anyway if you would like to make this view more user-friendly, I can add that. Just ping me.\n> >\n> >I felt we could add pg_size_pretty to make the view more user friendly.\n> >\n>\n> Please no. That'd make processing of the data (say, computing progress\n> as processed/total) impossible. It's easy to add pg_size_pretty if you\n> want it, it's impossible to undo it. I don't see a single pg_size_pretty\n> call in system_views.sql.\n>\n\nI thought of including pg_size_pretty as there was no total_bytes\nto compare with, but I'm ok without it too as there is an option for the\nuser to always include it on the client side like \"SELECT\npg_size_pretty(file_bytes_processed) from pg_stat_progress_copy;\" if\nrequired.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 25 Jun 2020 06:35:22 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "The automated patchtester for the Commitfest gets confused when there are two\nversions of the same changeset attached to the email, as it tries to apply them\nboth which obviously results in an application failure. I've attached just the\npreviously submitted patch version to this email to see if we can get a test\nrun of it.\n\ncheers ./daniel",
"msg_date": "Thu, 2 Jul 2020 14:25:16 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "čt 2. 7. 2020 v 14:25 odesílatel Daniel Gustafsson <daniel@yesql.se> napsal:\n\n> The automated patchtester for the Commitfest gets confused when there are\n> two\n> versions of the same changeset attached to the email, as it tries to apply\n> them\n> both which obviously results in an application failure. I've attached\n> just the\n> previously submitted patch version to this email to see if we can get a\n> test\n> run of it.\n>\n\nThanks, I'm new to commitfest and I was confused as well. I tried to\nreattach the thread there. I'll prepare a new patch soon, what should I do?\nJust attach it again?\n\n\n> cheers ./daniel\n>\n>\n\nčt 2. 7. 2020 v 14:25 odesílatel Daniel Gustafsson <daniel@yesql.se> napsal:The automated patchtester for the Commitfest gets confused when there are two\nversions of the same changeset attached to the email, as it tries to apply them\nboth which obviously results in an application failure. I've attached just the\npreviously submitted patch version to this email to see if we can get a test\nrun of it.Thanks, I'm new to commitfest and I was confused as well. I tried to reattach the thread there. I'll prepare a new patch soon, what should I do? Just attach it again? \ncheers ./daniel",
"msg_date": "Thu, 2 Jul 2020 14:42:16 +0200",
"msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "> On 2 Jul 2020, at 14:42, Josef Šimánek <josef.simanek@gmail.com> wrote:\n> čt 2. 7. 2020 v 14:25 odesílatel Daniel Gustafsson <daniel@yesql.se <mailto:daniel@yesql.se>> napsal:\n\n> The automated patchtester for the Commitfest gets confused when there are two\n> versions of the same changeset attached to the email, as it tries to apply them\n> both which obviously results in an application failure. I've attached just the\n> previously submitted patch version to this email to see if we can get a test\n> run of it.\n> \n> Thanks, I'm new to commitfest and I was confused as well.\n\nNo worries, we're here to help.\n\n> I tried to reattach the thread there. I'll prepare a new patch soon, what should I do? Just attach it again?\n\nCorrect, just reply to the thread with a new version of the patch attached, and\nit'll get picked up automatically. No need to do anything more.\n\ncheers ./daniel\n\n",
"msg_date": "Thu, 2 Jul 2020 14:51:40 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "\n\nOn 2020/07/02 21:51, Daniel Gustafsson wrote:\n>> On 2 Jul 2020, at 14:42, Josef Šimánek <josef.simanek@gmail.com> wrote:\n>> čt 2. 7. 2020 v 14:25 odesílatel Daniel Gustafsson <daniel@yesql.se <mailto:daniel@yesql.se>> napsal:\n> \n>> The automated patchtester for the Commitfest gets confused when there are two\n>> versions of the same changeset attached to the email, as it tries to apply them\n>> both which obviously results in an application failure. I've attached just the\n>> previously submitted patch version to this email to see if we can get a test\n>> run of it.\n>>\n>> Thanks, I'm new to commitfest and I was confused as well.\n> \n> No worries, we're here to help.\n> \n>> I tried to reattach the thread there. I'll prepare a new patch soon, what should I do? Just attach it again?\n\nNew patch has not been sent yet.\nSo I marked this as \"Waiting on Author\" at Commit Fest.\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 29 Jul 2020 02:00:04 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "Thanks for the info. I am waiting for review. Is there any summary of\nrequested changes needed?\n\nDne út 28. 7. 2020 19:00 uživatel Fujii Masao <masao.fujii@oss.nttdata.com>\nnapsal:\n\n>\n>\n> On 2020/07/02 21:51, Daniel Gustafsson wrote:\n> >> On 2 Jul 2020, at 14:42, Josef Šimánek <josef.simanek@gmail.com> wrote:\n> >> čt 2. 7. 2020 v 14:25 odesílatel Daniel Gustafsson <daniel@yesql.se\n> <mailto:daniel@yesql.se>> napsal:\n> >\n> >> The automated patchtester for the Commitfest gets confused when there\n> are two\n> >> versions of the same changeset attached to the email, as it tries to\n> apply them\n> >> both which obviously results in an application failure. I've attached\n> just the\n> >> previously submitted patch version to this email to see if we can get a\n> test\n> >> run of it.\n> >>\n> >> Thanks, I'm new to commitfest and I was confused as well.\n> >\n> > No worries, we're here to help.\n> >\n> >> I tried to reattach the thread there. I'll prepare a new patch soon,\n> what should I do? Just attach it again?\n>\n> New patch has not been sent yet.\n> So I marked this as \"Waiting on Author\" at Commit Fest.\n>\n> Regards,\n>\n>\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n>\n\nThanks for the info. I am waiting for review. Is there any summary of requested changes needed?Dne út 28. 7. 2020 19:00 uživatel Fujii Masao <masao.fujii@oss.nttdata.com> napsal:\n\nOn 2020/07/02 21:51, Daniel Gustafsson wrote:\n>> On 2 Jul 2020, at 14:42, Josef Šimánek <josef.simanek@gmail.com> wrote:\n>> čt 2. 7. 2020 v 14:25 odesílatel Daniel Gustafsson <daniel@yesql.se <mailto:daniel@yesql.se>> napsal:\n> \n>> The automated patchtester for the Commitfest gets confused when there are two\n>> versions of the same changeset attached to the email, as it tries to apply them\n>> both which obviously results in an application failure. 
I've attached just the\n>> previously submitted patch version to this email to see if we can get a test\n>> run of it.\n>>\n>> Thanks, I'm new to commitfest and I was confused as well.\n> \n> No worries, we're here to help.\n> \n>> I tried to reattach the thread there. I'll prepare a new patch soon, what should I do? Just attach it again?\n\nNew patch has not been sent yet.\nSo I marked this as \"Waiting on Author\" at Commit Fest.\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Tue, 28 Jul 2020 20:24:57 +0200",
"msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "út 28. 7. 2020 v 20:25 odesílatel Josef Šimánek <josef.simanek@gmail.com>\nnapsal:\n\n> Thanks for the info. I am waiting for review. Is there any summary of\n> requested changes needed?\n>\n\nMaybe it is just noise - you wrote so you will resend a patch to different\nthread\n\n>\n>> I tried to reattach the thread there. I'll prepare a new patch soon,\nwhat should I do? Just attach it again?\n\nRegards\n\nPavel\n\n\n> Dne út 28. 7. 2020 19:00 uživatel Fujii Masao <masao.fujii@oss.nttdata.com>\n> napsal:\n>\n>>\n>>\n>> On 2020/07/02 21:51, Daniel Gustafsson wrote:\n>> >> On 2 Jul 2020, at 14:42, Josef Šimánek <josef.simanek@gmail.com>\n>> wrote:\n>> >> čt 2. 7. 2020 v 14:25 odesílatel Daniel Gustafsson <daniel@yesql.se\n>> <mailto:daniel@yesql.se>> napsal:\n>> >\n>> >> The automated patchtester for the Commitfest gets confused when there\n>> are two\n>> >> versions of the same changeset attached to the email, as it tries to\n>> apply them\n>> >> both which obviously results in an application failure. I've attached\n>> just the\n>> >> previously submitted patch version to this email to see if we can get\n>> a test\n>> >> run of it.\n>> >>\n>> >> Thanks, I'm new to commitfest and I was confused as well.\n>> >\n>> > No worries, we're here to help.\n>> >\n>> >> I tried to reattach the thread there. I'll prepare a new patch soon,\n>> what should I do? Just attach it again?\n>>\n>> New patch has not been sent yet.\n>> So I marked this as \"Waiting on Author\" at Commit Fest.\n>>\n>> Regards,\n>>\n>>\n>> --\n>> Fujii Masao\n>> Advanced Computing Technology Center\n>> Research and Development Headquarters\n>> NTT DATA CORPORATION\n>>\n>\n\nút 28. 7. 2020 v 20:25 odesílatel Josef Šimánek <josef.simanek@gmail.com> napsal:Thanks for the info. I am waiting for review. Is there any summary of requested changes needed?Maybe it is just noise - you wrote so you will resend a patch to different thread> \n>> I tried to reattach the thread there. 
I'll prepare a new patch soon, what should I do? Just attach it again?RegardsPavel Dne út 28. 7. 2020 19:00 uživatel Fujii Masao <masao.fujii@oss.nttdata.com> napsal:\n\nOn 2020/07/02 21:51, Daniel Gustafsson wrote:\n>> On 2 Jul 2020, at 14:42, Josef Šimánek <josef.simanek@gmail.com> wrote:\n>> čt 2. 7. 2020 v 14:25 odesílatel Daniel Gustafsson <daniel@yesql.se <mailto:daniel@yesql.se>> napsal:\n> \n>> The automated patchtester for the Commitfest gets confused when there are two\n>> versions of the same changeset attached to the email, as it tries to apply them\n>> both which obviously results in an application failure. I've attached just the\n>> previously submitted patch version to this email to see if we can get a test\n>> run of it.\n>>\n>> Thanks, I'm new to commitfest and I was confused as well.\n> \n> No worries, we're here to help.\n> \n>> I tried to reattach the thread there. I'll prepare a new patch soon, what should I do? Just attach it again?\n\nNew patch has not been sent yet.\nSo I marked this as \"Waiting on Author\" at Commit Fest.\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Tue, 28 Jul 2020 20:30:22 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "\n\nOn 2020/07/29 3:30, Pavel Stehule wrote:\n> \n> \n> út 28. 7. 2020 v 20:25 odesílatel Josef Šimánek <josef.simanek@gmail.com <mailto:josef.simanek@gmail.com>> napsal:\n> \n> Thanks for the info. I am waiting for review. Is there any summary of requested changes needed?\n> \n> \n> Maybe it is just noise - you wrote so you will resend a patch to different thread\n> \n>> \n>>> I tried to reattach the thread there. I'll prepare a new patch soon, what should I do? Just attach it again?\n\nYeah, since I read this message, I was thinking that new patch will be\nposted. But, Josef, if the situation was changed, could you correct me?\nAnyway the patch seems not to be applied cleanly, so at least it needs to\nbe updated to address that.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 30 Jul 2020 08:51:36 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "On Thu, Jul 30, 2020 at 08:51:36AM +0900, Fujii Masao wrote:\n> Yeah, since I read this message, I was thinking that new patch will be\n> posted. But, Josef, if the situation was changed, could you correct me?\n> Anyway the patch seems not to be applied cleanly, so at least it needs to\n> be updated to address that.\n\nJosef, the patch has been waiting on author for a bit more than one\nmonth, so could you send a rebased version please? It looks that you\nare still a bit confused by the commit fest process, and as a first\nstep we need a clean version to be able to review it. This would also\nallow the commit fest bot to check it at http://commitfest.cputube.org/.\n--\nMichael",
"msg_date": "Mon, 7 Sep 2020 13:13:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "On Mon, Sep 07, 2020 at 01:13:10PM +0900, Michael Paquier wrote:\n> Josef, the patch has been waiting on author for a bit more than one\n> month, so could you send a rebased version please? It looks that you\n> are still a bit confused by the commit fest process, and as a first\n> step we need a clean version to be able to review it. This would also\n> allow the commit fest bot to check it at http://commitfest.cputube.org/.\n\nThis feature has some appeal, but there is no activity lately, so I am\nmarking it as returned with feedback. Please feel free to send a new\npatch once you have time to do so, and I would recommend to register a\nnew entry in the commit fest app when done.\n--\nMichael",
"msg_date": "Thu, 17 Sep 2020 14:09:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
},
{
"msg_contents": "čt 17. 9. 2020 v 7:09 odesílatel Michael Paquier <michael@paquier.xyz> napsal:\n>\n> On Mon, Sep 07, 2020 at 01:13:10PM +0900, Michael Paquier wrote:\n> > Josef, the patch has been waiting on author for a bit more than one\n> > month, so could you send a rebased version please? It looks that you\n> > are still a bit confused by the commit fest process, and as a first\n> > step we need a clean version to be able to review it. This would also\n> > allow the commit fest bot to check it at http://commitfest.cputube.org/.\n>\n> This feature has some appeal, but there is no activity lately, so I am\n> marking it as returned with feedback. Please feel free to send a new\n> patch once you have time to do so, and I would recommend to register a\n> new entry in the commit fest app when done.\n\nThanks for info. I hope I'll be able to revisit this patch soon,\nrebase and post again. I'm still interested in this.\n\n> --\n> Michael\n\n\n",
"msg_date": "Thu, 17 Sep 2020 17:08:46 +0200",
"msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Initial progress reporting for COPY command"
}
]
[
{
"msg_contents": "Hi,\n\nPer the docs, pg_replication_slots.min_safe_lsn inedicates \"the minimum\nLSN currently available for walsenders\". When I executed pg_walfile_name()\nwith min_safe_lsn, the function returned the name of the last removed\nWAL file instead of minimum available WAL file name. This happens because\nmin_safe_lsn actually indicates the ending position (the boundary byte)\nof the last removed WAL file.\n\nI guess that some users would want to calculate the minimum available\nWAL file name from min_safe_lsn by using pg_walfile_name(), but the result\nwould be incorrect. Isn't this confusing? min_safe_lsn should indicate\nthe bondary byte + 1, instead?\n\nBTW, I just wonder why each row in pg_replication_slots needs to have\nmin_safe_lsn column? Basically min_safe_lsn should be the same between\nevery replication slots.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 15 Jun 2020 12:40:03 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On Mon, Jun 15, 2020 at 12:40:03PM +0900, Fujii Masao wrote:\n> BTW, I just wonder why each row in pg_replication_slots needs to have\n> min_safe_lsn column? Basically min_safe_lsn should be the same between\n> every replication slots.\n\nIndeed, that's confusing in its current shape. I would buy putting\nthis value into pg_replication_slots if there were some consistency of\nthe data to worry about because of locking issues, but here this data\nis controlled within info_lck, which is out of the the repslot LW\nlock. So I think that it is incorrect to put this data in this view\nand that we should remove it, and that instead we had better push for\na system view that maps with the contents of XLogCtl.\n--\nMichael",
"msg_date": "Mon, 15 Jun 2020 13:44:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "At Mon, 15 Jun 2020 13:44:31 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Mon, Jun 15, 2020 at 12:40:03PM +0900, Fujii Masao wrote:\n> > BTW, I just wonder why each row in pg_replication_slots needs to have\n> > min_safe_lsn column? Basically min_safe_lsn should be the same between\n> > every replication slots.\n> \n> Indeed, that's confusing in its current shape. I would buy putting\n> this value into pg_replication_slots if there were some consistency of\n> the data to worry about because of locking issues, but here this data\n> is controlled within info_lck, which is out of the the repslot LW\n> lock. So I think that it is incorrect to put this data in this view\n> and that we should remove it, and that instead we had better push for\n> a system view that maps with the contents of XLogCtl.\n\nIt was once the difference from the safe_lsn to restart_lsn which is\ndistinct among slots. Then it was changed to the safe_lsn. I agree to\nthe discussion above, but it is needed anywhere since no one can know\nthe margin until the slot goes to the \"lost\" state without it. (Note\nthat currently even wal_status and min_safe_lsn can be inconsistent in\na line.)\n\nJust for the need for table-consistency and in-line consistency, we\ncould just remember the result of XLogGetLastRemovedSegno() around\ntaking info_lock in the function. That doesn't make a practical\ndifference but makes the view look consistent.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 15 Jun 2020 16:35:22 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "\n\nOn 2020/06/15 16:35, Kyotaro Horiguchi wrote:\n> At Mon, 15 Jun 2020 13:44:31 +0900, Michael Paquier <michael@paquier.xyz> wrote in\n>> On Mon, Jun 15, 2020 at 12:40:03PM +0900, Fujii Masao wrote:\n>>> BTW, I just wonder why each row in pg_replication_slots needs to have\n>>> min_safe_lsn column? Basically min_safe_lsn should be the same between\n>>> every replication slots.\n>>\n>> Indeed, that's confusing in its current shape. I would buy putting\n>> this value into pg_replication_slots if there were some consistency of\n>> the data to worry about because of locking issues, but here this data\n>> is controlled within info_lck, which is out of the the repslot LW\n>> lock. So I think that it is incorrect to put this data in this view\n>> and that we should remove it, and that instead we had better push for\n>> a system view that maps with the contents of XLogCtl.\n\nAgreed. But as you know it's too late to do that for v13...\nSo firstly I'd like to fix the issues in pg_replication_slots view,\nand then we can improve the things later for v14 if necessary.\n\n\n> It was once the difference from the safe_lsn to restart_lsn which is\n> distinct among slots. Then it was changed to the safe_lsn. I agree to\n> the discussion above, but it is needed anywhere since no one can know\n> the margin until the slot goes to the \"lost\" state without it. (Note\n> that currently even wal_status and min_safe_lsn can be inconsistent in\n> a line.)\n> \n> Just for the need for table-consistency and in-line consistency, we\n> could just remember the result of XLogGetLastRemovedSegno() around\n> taking info_lock in the function. That doesn't make a practical\n> difference but makes the view look consistent.\n\nAgreed. Thanks for the patch. 
Here are the review comments:\n\n\nNot only the last removed segment but also current write position\nshould be obtain at the beginning of pg_get_replication_slots()\nand should be given to GetWALAvailability(), for the consistency?\n\n\nEven after applying your patch, min_safe_lsn is calculated for\neach slot even though the calculated result is always the same.\nWhich is a bit waste of cycle. We should calculate min_safe_lsn\nat the beginning and use it for each slot?\n\n\n\t\t\tXLogSegNoOffsetToRecPtr(last_removed_seg + 1, 0,\n\nIsn't it better to use 1 as the second argument of the above,\nin order to address the issue that I reported upthread?\nOtherwise, the WAL file name that pg_walfile_name(min_safe_lsn) returns\nwould be confusing.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 17 Jun 2020 21:37:55 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "Thanks for the comments.\n\nAt Wed, 17 Jun 2020 21:37:55 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> On 2020/06/15 16:35, Kyotaro Horiguchi wrote:\n> > At Mon, 15 Jun 2020 13:44:31 +0900, Michael Paquier\n> > <michael@paquier.xyz> wrote in\n> >> On Mon, Jun 15, 2020 at 12:40:03PM +0900, Fujii Masao wrote:\n> >>> BTW, I just wonder why each row in pg_replication_slots needs to have\n> >>> min_safe_lsn column? Basically min_safe_lsn should be the same between\n> >>> every replication slots.\n> >>\n> >> Indeed, that's confusing in its current shape. I would buy putting\n> >> this value into pg_replication_slots if there were some consistency of\n> >> the data to worry about because of locking issues, but here this data\n> >> is controlled within info_lck, which is out of the the repslot LW\n> >> lock. So I think that it is incorrect to put this data in this view\n> >> and that we should remove it, and that instead we had better push for\n> >> a system view that maps with the contents of XLogCtl.\n> \n> Agreed. But as you know it's too late to do that for v13...\n> So firstly I'd like to fix the issues in pg_replication_slots view,\n> and then we can improve the things later for v14 if necessary.\n> \n> \n> > It was once the difference from the safe_lsn to restart_lsn which is\n> > distinct among slots. Then it was changed to the safe_lsn. I agree to\n> > the discussion above, but it is needed anywhere since no one can know\n> > the margin until the slot goes to the \"lost\" state without it. (Note\n> > that currently even wal_status and min_safe_lsn can be inconsistent in\n> > a line.)\n> > Just for the need for table-consistency and in-line consistency, we\n> > could just remember the result of XLogGetLastRemovedSegno() around\n> > taking info_lock in the function. That doesn't make a practical\n> > difference but makes the view look consistent.\n> \n> Agreed. Thanks for the patch. 
Here are the review comments:\n> \n> \n> Not only the last removed segment but also current write position\n> should be obtain at the beginning of pg_get_replication_slots()\n> and should be given to GetWALAvailability(), for the consistency?\n\nYou are right. Though I faintly thought that I didn't need that since\nWriteRecPtr doesn't move by so wide steps as removed_segment, actually\nit moves.\n\n> Even after applying your patch, min_safe_lsn is calculated for\n> each slot even though the calculated result is always the same.\n> Which is a bit waste of cycle. We should calculate min_safe_lsn\n> at the beginning and use it for each slot?\n\nAgreed. That may results in a wastful calculation but it's better than\nrepeated wasteful calculations.\n\n> \t\t\tXLogSegNoOffsetToRecPtr(last_removed_seg + 1, 0,\n> \n> Isn't it better to use 1 as the second argument of the above,\n> in order to address the issue that I reported upthread?\n> Otherwise, the WAL file name that pg_walfile_name(min_safe_lsn)\n> returns\n> would be confusing.\n\nMmm. pg_walfile_name seems too specialize to\npg_stop_backup(). (pg_walfile_name_offset() returns wrong result for\nsegment boundaries.) I'm not willing to do that only to follow such\nsuspicious(?) specification, but surely it would practically be better\ndoing that. Please find the attached first patch. I found that\nthere's no reason to hide min_safe_lsn when wal_status has certain\nvalues. So I changed it to be shown always.\n\nBy the way, I noticed that when a replication slot reserves all\nexisting WAL segments, checkpoint cannot remove a file and\nlastRemovedSegment is left being 0. The second attached forces\nRemoveOldXlogFiles to initialize the variable even when no WAL\nsegments are removed. It puts no additional loads on file system\nsince the directory is scanned anyway. 
My old proposal to\nunconditionally initialize it separately from checkpoint was rejected,\nbut I think this is acceptable.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 18 Jun 2020 15:22:16 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On Thu, Jun 18, 2020 at 11:52 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 17 Jun 2020 21:37:55 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n> > On 2020/06/15 16:35, Kyotaro Horiguchi wrote:\n> > Isn't it better to use 1 as the second argument of the above,\n> > in order to address the issue that I reported upthread?\n> > Otherwise, the WAL file name that pg_walfile_name(min_safe_lsn)\n> > returns\n> > would be confusing.\n>\n> Mmm. pg_walfile_name seems too specialize to\n> pg_stop_backup(). (pg_walfile_name_offset() returns wrong result for\n> segment boundaries.) I'm not willing to do that only to follow such\n> suspicious(?) specification, but surely it would practically be better\n> doing that. Please find the attached first patch.\n>\n\nIt is a little unclear to me how this or any proposed patch will solve\nthe original problem reported by Fujii-San? Basically, the problem\narises because we don't have an interlock between when the checkpoint\nremoves the WAL segment and the view tries to acquire the same. Am, I\nmissing something?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 18 Jun 2020 18:18:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "At Thu, 18 Jun 2020 18:18:37 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Thu, Jun 18, 2020 at 11:52 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Wed, 17 Jun 2020 21:37:55 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n> > > On 2020/06/15 16:35, Kyotaro Horiguchi wrote:\n> > > Isn't it better to use 1 as the second argument of the above,\n> > > in order to address the issue that I reported upthread?\n> > > Otherwise, the WAL file name that pg_walfile_name(min_safe_lsn)\n> > > returns\n> > > would be confusing.\n> >\n> > Mmm. pg_walfile_name seems too specialize to\n> > pg_stop_backup(). (pg_walfile_name_offset() returns wrong result for\n> > segment boundaries.) I'm not willing to do that only to follow such\n> > suspicious(?) specification, but surely it would practically be better\n> > doing that. Please find the attached first patch.\n> >\n> \n> It is a little unclear to me how this or any proposed patch will solve\n> the original problem reported by Fujii-San? Basically, the problem\n> arises because we don't have an interlock between when the checkpoint\n> removes the WAL segment and the view tries to acquire the same. Am, I\n> missing something?\n\nI'm not sure, but I don't get the point of blocking WAL segment\nremoval until the view is completed. The said columns of the view are\njust for monitoring, which needs an information snapshot seemingly\ntaken at a certain time. And InvalidateObsoleteReplicationSlots kills\nwalsenders using lastRemovedSegNo of a different time. The two are\nindependent each other.\n\nAlso the patch changes min_safe_lsn to show an LSN at segment boundary\n+ 1.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 19 Jun 2020 10:02:54 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On Fri, Jun 19, 2020 at 10:02:54AM +0900, Kyotaro Horiguchi wrote:\n> At Thu, 18 Jun 2020 18:18:37 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n>> It is a little unclear to me how this or any proposed patch will solve\n>> the original problem reported by Fujii-San? Basically, the problem\n>> arises because we don't have an interlock between when the checkpoint\n>> removes the WAL segment and the view tries to acquire the same. Am, I\n>> missing something?\n\nThe proposed patch fetches the computation of the minimum LSN across\nall slots before taking ReplicationSlotControlLock so its value gets\nmore lossy, and potentially older than what the slots actually\ninclude. So it is an attempt to take the safest spot possible.\n\nHonestly, I find a bit silly the design to compute and use the same\nminimum LSN value for all the tuples returned by\npg_get_replication_slots, and you can actually get a pretty good\nestimate of that by emulating ReplicationSlotsComputeRequiredLSN()\ndirectly with what pg_replication_slot provides as we have a min()\naggregate for pg_lsn.\n\nFor these reasons, I think that we should remove for now this\ninformation from the view, and reconsider this part more carefully for\n14~ with a clear definition of how much lossiness we are ready to\naccept for the information provided here, if necessary. We could for\nexample just have a separate SQL function that just grabs this value\n(or a more global SQL view for XLogCtl data that includes this data).\n\n> I'm not sure, but I don't get the point of blocking WAL segment\n> removal until the view is completed.\n\nWe should really not do that anyway for a monitoring view.\n--\nMichael",
"msg_date": "Fri, 19 Jun 2020 10:39:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "At Fri, 19 Jun 2020 10:39:58 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Fri, Jun 19, 2020 at 10:02:54AM +0900, Kyotaro Horiguchi wrote:\n> > At Thu, 18 Jun 2020 18:18:37 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> >> It is a little unclear to me how this or any proposed patch will solve\n> >> the original problem reported by Fujii-San? Basically, the problem\n> >> arises because we don't have an interlock between when the checkpoint\n> >> removes the WAL segment and the view tries to acquire the same. Am, I\n> >> missing something?\n> \n> The proposed patch fetches the computation of the minimum LSN across\n> all slots before taking ReplicationSlotControlLock so its value gets\n> more lossy, and potentially older than what the slots actually\n> include. So it is an attempt to take the safest spot possible.\n\nMinimum LSN (lastRemovedSegNo) is not protected by the lock. That\nmakes no defference.\n\n> Honestly, I find a bit silly the design to compute and use the same\n> minimum LSN value for all the tuples returned by\n> pg_get_replication_slots, and you can actually get a pretty good\n\nI see it as silly. I think I said upthread that it was the distance\nto the point where the slot loses a segment, and it was rejected but\njust removing it makes us unable to estimate the distance so it is\nthere.\n\n> estimate of that by emulating ReplicationSlotsComputeRequiredLSN()\n> directly with what pg_replication_slot provides as we have a min()\n> aggregate for pg_lsn.\n\nmin(lastRemovedSegNo) is the earliest value. It is enough to read it\nat the first then use it in all slots.\n\n> For these reasons, I think that we should remove for now this\n> information from the view, and reconsider this part more carefully for\n> 14~ with a clear definition of how much lossiness we are ready to\n> accept for the information provided here, if necessary. 
We could for\n> example just have a separate SQL function that just grabs this value\n> (or a more global SQL view for XLogCtl data that includes this data).\n\nI think, we need at least one of the \"distance\" above or min_safe_lsn\nin anywhere reachable from users.\n\n> > I'm not sure, but I don't get the point of blocking WAL segment\n> > removal until the view is completed.\n> \n> We should really not do that anyway for a monitoring view.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 19 Jun 2020 12:13:56 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On Fri, Jun 19, 2020 at 6:32 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 18 Jun 2020 18:18:37 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > On Thu, Jun 18, 2020 at 11:52 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Wed, 17 Jun 2020 21:37:55 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n> > > > On 2020/06/15 16:35, Kyotaro Horiguchi wrote:\n> > > > Isn't it better to use 1 as the second argument of the above,\n> > > > in order to address the issue that I reported upthread?\n> > > > Otherwise, the WAL file name that pg_walfile_name(min_safe_lsn)\n> > > > returns\n> > > > would be confusing.\n> > >\n> > > Mmm. pg_walfile_name seems too specialize to\n> > > pg_stop_backup(). (pg_walfile_name_offset() returns wrong result for\n> > > segment boundaries.) I'm not willing to do that only to follow such\n> > > suspicious(?) specification, but surely it would practically be better\n> > > doing that. Please find the attached first patch.\n> > >\n> >\n> > It is a little unclear to me how this or any proposed patch will solve\n> > the original problem reported by Fujii-San? Basically, the problem\n> > arises because we don't have an interlock between when the checkpoint\n> > removes the WAL segment and the view tries to acquire the same. Am, I\n> > missing something?\n>\n> I'm not sure, but I don't get the point of blocking WAL segment\n> removal until the view is completed.\n>\n\nI am not suggesting to do that.\n\n> The said columns of the view are\n> just for monitoring, which needs an information snapshot seemingly\n> taken at a certain time. And InvalidateObsoleteReplicationSlots kills\n> walsenders using lastRemovedSegNo of a different time. The two are\n> independent each other.\n>\n> Also the patch changes min_safe_lsn to show an LSN at segment boundary\n> + 1.\n>\n\nBut aren't we doing last_removed_seg+1 even without the patch? 
See code below\n\n- {\n- XLogRecPtr min_safe_lsn;\n-\n- XLogSegNoOffsetToRecPtr(last_removed_seg + 1, 0,\n- wal_segment_size, min_safe_lsn);\n\n\n\nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 19 Jun 2020 08:59:48 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On Fri, Jun 19, 2020 at 8:44 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Fri, 19 Jun 2020 10:39:58 +0900, Michael Paquier <michael@paquier.xyz> wrote in\n> > Honestly, I find a bit silly the design to compute and use the same\n> > minimum LSN value for all the tuples returned by\n> > pg_get_replication_slots, and you can actually get a pretty good\n>\n> I see it as silly. I think I said upthread that it was the distance\n> to the point where the slot loses a segment, and it was rejected but\n> just removing it makes us unable to estimate the distance so it is\n> there.\n>\n\nIIUC, the value of min_safe_lsn will lesser than restart_lsn, so one\ncan compute the difference of those to see how much ahead the\nreplication slot's restart_lsn is from min_safe_lsn but still it is\nnot clear how user will make any use of it. Can you please explain\nhow the distance you are talking about is useful to users or anyone?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 19 Jun 2020 09:09:03 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "At Fri, 19 Jun 2020 09:09:03 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Fri, Jun 19, 2020 at 8:44 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Fri, 19 Jun 2020 10:39:58 +0900, Michael Paquier <michael@paquier.xyz> wrote in\n> > > Honestly, I find a bit silly the design to compute and use the same\n> > > minimum LSN value for all the tuples returned by\n> > > pg_get_replication_slots, and you can actually get a pretty good\n> >\n> > I see it as silly. I think I said upthread that it was the distance\n> > to the point where the slot loses a segment, and it was rejected but\n> > just removing it makes us unable to estimate the distance so it is\n> > there.\n> >\n> \n> IIUC, the value of min_safe_lsn will lesser than restart_lsn, so one\n> can compute the difference of those to see how much ahead the\n> replication slot's restart_lsn is from min_safe_lsn but still it is\n> not clear how user will make any use of it. Can you please explain\n> how the distance you are talking about is useful to users or anyone?\n\nWhen max_slot_wal_keep_size is set, the slot may retain up to as many\nas that amount of old WAL segments then suddenly loses the oldest\nsegments. *I* thought that I would use it in an HA cluster tool to\ninform users about the remaining time (not literally, of course) a\ndisconnected standy is allowed diconnected. Of course even if some\nsegments have been lost, they could be copied from the primary's\narchive so that's not critical in theory.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 19 Jun 2020 14:35:11 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "At Fri, 19 Jun 2020 08:59:48 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> > > > Mmm. pg_walfile_name seems too specialize to\n> > > > pg_stop_backup(). (pg_walfile_name_offset() returns wrong result for\n> > > > segment boundaries.) I'm not willing to do that only to follow such\n> > > > suspicious(?) specification, but surely it would practically be better\n> > > > doing that. Please find the attached first patch.\n> > > >\n> > >\n> > > It is a little unclear to me how this or any proposed patch will solve\n> > > the original problem reported by Fujii-San? Basically, the problem\n> > > arises because we don't have an interlock between when the checkpoint\n> > > removes the WAL segment and the view tries to acquire the same. Am, I\n> > > missing something?\n> >\n> > I'm not sure, but I don't get the point of blocking WAL segment\n> > removal until the view is completed.\n> >\n> \n> I am not suggesting to do that.\n> \n> > The said columns of the view are\n> > just for monitoring, which needs an information snapshot seemingly\n> > taken at a certain time. And InvalidateObsoleteReplicationSlots kills\n> > walsenders using lastRemovedSegNo of a different time. The two are\n> > independent each other.\n> >\n> > Also the patch changes min_safe_lsn to show an LSN at segment boundary\n> > + 1.\n> >\n> \n> But aren't we doing last_removed_seg+1 even without the patch? See code below\n> \n> - {\n> - XLogRecPtr min_safe_lsn;\n> -\n> - XLogSegNoOffsetToRecPtr(last_removed_seg + 1, 0,\n> - wal_segment_size, min_safe_lsn);\n\nIt is at the beginning byte of the *next* segment. Fujii-san told that\nit should be the next byte of it, namely\n\"XLogSegNoOffsetToRecPtr(last_removed_seg + 1, *1*,\", and the patch\ncalculates as that. 
It adds the follows instead.\n\n+\tif (max_slot_wal_keep_size_mb >= 0 && last_removed_seg != 0)\n+\t\tXLogSegNoOffsetToRecPtr(last_removed_seg + 1, 1,\n+\t\t\t\t\t\t\t\twal_segment_size, min_safe_lsn);\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 19 Jun 2020 14:42:45 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "\n\nOn 2020/06/19 10:39, Michael Paquier wrote:\n> On Fri, Jun 19, 2020 at 10:02:54AM +0900, Kyotaro Horiguchi wrote:\n>> At Thu, 18 Jun 2020 18:18:37 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n>>> It is a little unclear to me how this or any proposed patch will solve\n>>> the original problem reported by Fujii-San? Basically, the problem\n>>> arises because we don't have an interlock between when the checkpoint\n>>> removes the WAL segment and the view tries to acquire the same. Am, I\n>>> missing something?\n> \n> The proposed patch fetches the computation of the minimum LSN across\n> all slots before taking ReplicationSlotControlLock so its value gets\n> more lossy, and potentially older than what the slots actually\n> include. So it is an attempt to take the safest spot possible.\n> \n> Honestly, I find a bit silly the design to compute and use the same\n> minimum LSN value for all the tuples returned by\n> pg_get_replication_slots, and you can actually get a pretty good\n> estimate of that by emulating ReplicationSlotsComputeRequiredLSN()\n> directly with what pg_replication_slot provides as we have a min()\n> aggregate for pg_lsn.\n> \n> For these reasons, I think that we should remove for now this\n> information from the view, and reconsider this part more carefully for\n> 14~ with a clear definition of how much lossiness we are ready to\n> accept for the information provided here, if necessary.\n\nAgreed. But isn't it too late to remove the columns (i.e., change\nthe catalog) for v13? Because v13 beta1 was already released.\nIIUC the catalog should not be changed since beta1 release so that\nusers can upgrade PostgreSQL without initdb.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 19 Jun 2020 16:13:27 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On Fri, Jun 19, 2020 at 04:13:27PM +0900, Fujii Masao wrote:\n> Agreed. But isn't it too late to remove the columns (i.e., change\n> the catalog) for v13? Because v13 beta1 was already released.\n> IIUC the catalog should not be changed since beta1 release so that\n> users can upgrade PostgreSQL without initdb.\n\nCatalog bumps have happened in the past between beta versions:\ngit log -p REL_12_BETA1..REL_12_BETA2 src/include/catalog/catversion.h\ngit log -p REL_11_BETA1..REL_11_BETA2 src/include/catalog/catversion.h\ngit log -p REL_10_BETA1..REL_10_BETA2 src/include/catalog/catversion.h\n\nSo we usually avoid to do that between betas, but my take here is that\na catalog bump is better than regretting a change we may have to live\nwith after the release is sealed.\n--\nMichael",
"msg_date": "Fri, 19 Jun 2020 16:36:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "At Fri, 19 Jun 2020 16:36:09 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Fri, Jun 19, 2020 at 04:13:27PM +0900, Fujii Masao wrote:\n> > Agreed. But isn't it too late to remove the columns (i.e., change\n> > the catalog) for v13? Because v13 beta1 was already released.\n> > IIUC the catalog should not be changed since beta1 release so that\n> > users can upgrade PostgreSQL without initdb.\n> \n> Catalog bumps have happened in the past between beta versions:\n> git log -p REL_12_BETA1..REL_12_BETA2 src/include/catalog/catversion.h\n> git log -p REL_11_BETA1..REL_11_BETA2 src/include/catalog/catversion.h\n> git log -p REL_10_BETA1..REL_10_BETA2 src/include/catalog/catversion.h\n> \n> So we usually avoid to do that between betas, but my take here is that\n> a catalog bump is better than regretting a change we may have to live\n> with after the release is sealed.\n\nFWIW if we decide that it is really useless, I agree to remove it now.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 19 Jun 2020 16:43:04 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "\n\nOn 2020/06/19 16:43, Kyotaro Horiguchi wrote:\n> At Fri, 19 Jun 2020 16:36:09 +0900, Michael Paquier <michael@paquier.xyz> wrote in\n>> On Fri, Jun 19, 2020 at 04:13:27PM +0900, Fujii Masao wrote:\n>>> Agreed. But isn't it too late to remove the columns (i.e., change\n>>> the catalog) for v13? Because v13 beta1 was already released.\n>>> IIUC the catalog should not be changed since beta1 release so that\n>>> users can upgrade PostgreSQL without initdb.\n>>\n>> Catalog bumps have happened in the past between beta versions:\n>> git log -p REL_12_BETA1..REL_12_BETA2 src/include/catalog/catversion.h\n>> git log -p REL_11_BETA1..REL_11_BETA2 src/include/catalog/catversion.h\n>> git log -p REL_10_BETA1..REL_10_BETA2 src/include/catalog/catversion.h\n>>\n>> So we usually avoid to do that between betas, but my take here is that\n>> a catalog bump is better than regretting a change we may have to live\n>> with after the release is sealed.\n\nSounds reasonable.\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 19 Jun 2020 17:34:01 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On Fri, Jun 19, 2020 at 05:34:01PM +0900, Fujii Masao wrote:\n> On 2020/06/19 16:43, Kyotaro Horiguchi wrote:\n>> At Fri, 19 Jun 2020 16:36:09 +0900, Michael Paquier <michael@paquier.xyz> wrote in\n>>> So we usually avoid to do that between betas, but my take here is that\n>>> a catalog bump is better than regretting a change we may have to live\n>>> with after the release is sealed.\n> \n> Sounds reasonable.\n\nIf we want to make this happen, I am afraid that the time is short as\nbeta2 is planned for next week. As the version will be likely tagged\nby Monday US time, it would be good to get this addressed within 48\nhours to give some room to the buildfarm to react. Attached is a\nstraight-forward proposal of patch. Any thoughts?\n\n(The change in catversion.h is a self-reminder.)\n--\nMichael",
"msg_date": "Fri, 19 Jun 2020 21:15:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "At Fri, 19 Jun 2020 21:15:52 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Fri, Jun 19, 2020 at 05:34:01PM +0900, Fujii Masao wrote:\n> > On 2020/06/19 16:43, Kyotaro Horiguchi wrote:\n> >> At Fri, 19 Jun 2020 16:36:09 +0900, Michael Paquier <michael@paquier.xyz> wrote in\n> >>> So we usually avoid to do that between betas, but my take here is that\n> >>> a catalog bump is better than regretting a change we may have to live\n> >>> with after the release is sealed.\n> > \n> > Sounds reasonable.\n> \n> If we want to make this happen, I am afraid that the time is short as\n> beta2 is planned for next week. As the version will be likely tagged\n> by Monday US time, it would be good to get this addressed within 48\n> hours to give some room to the buildfarm to react. Attached is a\n> straight-forward proposal of patch. Any thoughts?\n> \n> (The change in catversion.h is a self-reminder.)\n\nThanks for the patch.\n\nAs a whole it contains all needed for ripping off the min_safe_lsn.\nSome items in the TAP test gets coarse but none of them lose\nsignificance. Compiles almost cleanly and passes all tests including\nTAP test.\n\nThe variable last_removed_seg in slotfuncs.c:285 is left alone but no\nlonger used after applying this patch. It should be removed as well.\n\nOther than that the patch looks good to me.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 19 Jun 2020 21:53:29 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "\n\nOn 2020/06/19 21:15, Michael Paquier wrote:\n> On Fri, Jun 19, 2020 at 05:34:01PM +0900, Fujii Masao wrote:\n>> On 2020/06/19 16:43, Kyotaro Horiguchi wrote:\n>>> At Fri, 19 Jun 2020 16:36:09 +0900, Michael Paquier <michael@paquier.xyz> wrote in\n>>>> So we usually avoid to do that between betas, but my take here is that\n>>>> a catalog bump is better than regretting a change we may have to live\n>>>> with after the release is sealed.\n>>\n>> Sounds reasonable.\n> \n> If we want to make this happen, I am afraid that the time is short as\n> beta2 is planned for next week. As the version will be likely tagged\n> by Monday US time, it would be good to get this addressed within 48\n> hours to give some room to the buildfarm to react. Attached is a\n> straight-forward proposal of patch. Any thoughts?\n\nIt's better if we can do that. But I think that we should hear Alvaro's opinion\nabout this before rushing to push the patch. Even if we miss beta2 as the result\nof that, I'm ok. We would be able to push something better into beta3.\nSo, CC Alvaro.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Sat, 20 Jun 2020 09:39:13 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On 2020-Jun-20, Fujii Masao wrote:\n\n> It's better if we can do that. But I think that we should hear Alvaro's opinion\n> about this before rushing to push the patch. Even if we miss beta2 as the result\n> of that, I'm ok. We would be able to push something better into beta3.\n> So, CC Alvaro.\n\nUh, I was not aware of this thread. I'll go over it now and let you\nknow.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 19 Jun 2020 21:18:55 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On 2020-Jun-19, Michael Paquier wrote:\n\n> If we want to make this happen, I am afraid that the time is short as\n> beta2 is planned for next week. As the version will be likely tagged\n> by Monday US time, it would be good to get this addressed within 48\n> hours to give some room to the buildfarm to react. Attached is a\n> straight-forward proposal of patch. Any thoughts?\n\nI don't disagree with removing the LSN column, but at the same time we\nneed to provide *some* way for users to monitor this, so let's add a\nfunction to extract the value they need for that. It seems simple\nenough.\n\nI cannot implement it myself now, though. I've reached the end of my\nweek and I'm not sure I'll be able to work on it during the weekend.\n\nI agree with Kyotaro's opinion that the pg_walfile_name() function seems\ntoo single-minded ...\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 19 Jun 2020 21:42:23 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On Sat, Jun 20, 2020 at 7:12 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2020-Jun-19, Michael Paquier wrote:\n>\n> > If we want to make this happen, I am afraid that the time is short as\n> > beta2 is planned for next week. As the version will be likely tagged\n> > by Monday US time, it would be good to get this addressed within 48\n> > hours to give some room to the buildfarm to react. Attached is a\n> > straight-forward proposal of patch. Any thoughts?\n>\n> I don't disagree with removing the LSN column, but at the same time we\n> need to provide *some* way for users to monitor this, so let's add a\n> function to extract the value they need for that. It seems simple\n> enough.\n>\n\nIsn't this information specific to checkpoints, so maybe better to\ndisplay in view pg_stat_bgwriter?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 20 Jun 2020 09:45:52 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On Sat, Jun 20, 2020 at 09:45:52AM +0530, Amit Kapila wrote:\n> On Sat, Jun 20, 2020 at 7:12 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>> I don't disagree with removing the LSN column, but at the same time we\n>> need to provide *some* way for users to monitor this, so let's add a\n>> function to extract the value they need for that. It seems simple\n>> enough.\n> \n> Isn't this information specific to checkpoints, so maybe better to\n> display in view pg_stat_bgwriter?\n\nNot sure that's a good match. If we decide to expose that, a separate\nfunction returning a LSN based on the segment number from\nXLogGetLastRemovedSegno() sounds fine to me, like\npg_wal_last_recycled_lsn(). Perhaps somebody has a better name in\nmind?\n--\nMichael",
"msg_date": "Sat, 20 Jun 2020 15:53:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On Sat, Jun 20, 2020 at 03:53:54PM +0900, Michael Paquier wrote:\n> On Sat, Jun 20, 2020 at 09:45:52AM +0530, Amit Kapila wrote:\n>> Isn't this information specific to checkpoints, so maybe better to\n>> display in view pg_stat_bgwriter?\n> \n> Not sure that's a good match. If we decide to expose that, a separate\n> function returning a LSN based on the segment number from\n> XLogGetLastRemovedSegno() sounds fine to me, like\n> pg_wal_last_recycled_lsn(). Perhaps somebody has a better name in\n> mind?\n\nI was thinking on this one for the last couple of days, and came up\nwith the name pg_wal_oldest_lsn(), as per the attached, traking the\noldest WAL location still available. That's unfortunately too late\nfor beta2, but let's continue the discussion.\n--\nMichael",
"msg_date": "Mon, 22 Jun 2020 14:49:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On Mon, Jun 22, 2020 at 11:19 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, Jun 20, 2020 at 03:53:54PM +0900, Michael Paquier wrote:\n> > On Sat, Jun 20, 2020 at 09:45:52AM +0530, Amit Kapila wrote:\n> >> Isn't this information specific to checkpoints, so maybe better to\n> >> display in view pg_stat_bgwriter?\n> >\n> > Not sure that's a good match. If we decide to expose that, a separate\n> > function returning a LSN based on the segment number from\n> > XLogGetLastRemovedSegno() sounds fine to me, like\n> > pg_wal_last_recycled_lsn(). Perhaps somebody has a better name in\n> > mind?\n>\n> I was thinking on this one for the last couple of days, and came up\n> with the name pg_wal_oldest_lsn(), as per the attached, traking the\n> oldest WAL location still available.\n>\n\nI feel such a function is good to have but I am not sure if there is a\nneed to tie it with the removal of min_safe_lsn column.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 22 Jun 2020 17:31:16 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "\n\nOn 2020/06/22 21:01, Amit Kapila wrote:\n> On Mon, Jun 22, 2020 at 11:19 AM Michael Paquier <michael@paquier.xyz> wrote:\n>>\n>> On Sat, Jun 20, 2020 at 03:53:54PM +0900, Michael Paquier wrote:\n>>> On Sat, Jun 20, 2020 at 09:45:52AM +0530, Amit Kapila wrote:\n>>>> Isn't this information specific to checkpoints, so maybe better to\n>>>> display in view pg_stat_bgwriter?\n>>>\n>>> Not sure that's a good match. If we decide to expose that, a separate\n>>> function returning a LSN based on the segment number from\n>>> XLogGetLastRemovedSegno() sounds fine to me, like\n>>> pg_wal_last_recycled_lsn(). Perhaps somebody has a better name in\n>>> mind?\n>>\n>> I was thinking on this one for the last couple of days, and came up\n>> with the name pg_wal_oldest_lsn(), as per the attached, traking the\n>> oldest WAL location still available.\n\nThanks for the patch!\n\n+ <literal>NULL</literal> if no WAL segments have been removed since\n+ startup.\n\nIsn't this confusing? I think that we should store the last removed\nWAL segment to somewhere (e.g., pg_control) and restore it at\nthe startup, so that we can see the actual value even after the startup.\nOr we should scan pg_wal directory and find the \"minimal\" WAL segment\nand return its LSN.\n\n\n> I feel such a function is good to have but I am not sure if there is a\n> need to tie it with the removal of min_safe_lsn column.\n\nWe should expose the LSN calculated from\n\"the current WAL LSN - max(wal_keep_segments * 16MB, max_slot_wal_keep_size)\"?\nThis indicates the minimum LSN of WAL files that are guaraneed to be\ncurrently retained by wal_keep_segments and max_slot_wal_keep_size.\nThat is, if checkpoint occurs when restart_lsn of replication slot is\nsmaller than that minimum LSN, some required WAL files may be removed.\n\nSo DBAs can periodically monitor and compare restart_lsn and that minimum\nLSN. 
If they see frequently that difference of those LSN is very small,\nthey can decide to increase wal_keep_segments or max_slot_wal_keep_size,\nto prevent required WAL files from being removed. Thought?\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 22 Jun 2020 22:02:51 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "At Mon, 22 Jun 2020 22:02:51 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2020/06/22 21:01, Amit Kapila wrote:\n> > On Mon, Jun 22, 2020 at 11:19 AM Michael Paquier <michael@paquier.xyz>\n> > wrote:\n> >>\n> >> On Sat, Jun 20, 2020 at 03:53:54PM +0900, Michael Paquier wrote:\n> >>> On Sat, Jun 20, 2020 at 09:45:52AM +0530, Amit Kapila wrote:\n> >>>> Isn't this information specific to checkpoints, so maybe better to\n> >>>> display in view pg_stat_bgwriter?\n> >>>\n> >>> Not sure that's a good match. If we decide to expose that, a separate\n> >>> function returning a LSN based on the segment number from\n> >>> XLogGetLastRemovedSegno() sounds fine to me, like\n> >>> pg_wal_last_recycled_lsn(). Perhaps somebody has a better name in\n> >>> mind?\n> >>\n> >> I was thinking on this one for the last couple of days, and came up\n> >> with the name pg_wal_oldest_lsn(), as per the attached, traking the\n> >> oldest WAL location still available.\n> \n> Thanks for the patch!\n> \n> + <literal>NULL</literal> if no WAL segments have been removed since\n> + startup.\n> \n> Isn't this confusing? I think that we should store the last removed\n> WAL segment to somewhere (e.g., pg_control) and restore it at\n> the startup, so that we can see the actual value even after the\n> startup.\n> Or we should scan pg_wal directory and find the \"minimal\" WAL segment\n> and return its LSN.\n\nRunning a separate scan on pg_wal at startup or first time the oldest\nWAL segno is referenced is something that was rejected before. But\nwith the current behavior we don't find the last removed segment until\nany slot loses a segment if all WAL files are retained by a slot. 
FWIW\nI recently proposed a patch to find the oldest WAL file while trying\nremoving old WAL files.\n\n> > I feel such a function is good to have but I am not sure if there is a\n> > need to tie it with the removal of min_safe_lsn column.\n> \n> We should expose the LSN calculated from\n> \"the current WAL LSN - max(wal_keep_segments * 16MB,\n> max_slot_wal_keep_size)\"?\n> This indicates the minimum LSN of WAL files that are guaraneed to be\n> currently retained by wal_keep_segments and max_slot_wal_keep_size.\n> That is, if checkpoint occurs when restart_lsn of replication slot is\n> smaller than that minimum LSN, some required WAL files may be removed.\n> So DBAs can periodically monitor and compare restart_lsn and that\n> minimum\n> LSN. If they see frequently that difference of those LSN is very\n> small,\n> they can decide to increase wal_keep_segments or\n> max_slot_wal_keep_size,\n> to prevent required WAL files from being removed. Thought?\n\nI'm not sure about the consensus here about showing that number in the\nview. It is similar to \"remain\" in the earlier versions of this patch\nbut a bit simpler. It would be usable in a similar way. I can live\nwith either numbers.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 23 Jun 2020 10:10:37 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On Tue, Jun 23, 2020 at 10:10:37AM +0900, Kyotaro Horiguchi wrote:\n> At Mon, 22 Jun 2020 22:02:51 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n>> Isn't this confusing? I think that we should store the last removed\n>> WAL segment to somewhere (e.g., pg_control) and restore it at\n>> the startup, so that we can see the actual value even after the\n>> startup.\n>> Or we should scan pg_wal directory and find the \"minimal\" WAL segment\n>> and return its LSN.\n> \n> Running a separate scan on pg_wal at startup or first time the oldest\n> WAL segno is referenced is something that was rejected before. But\n> with the current behavior we don't find the last removed segment until\n> any slot loses a segment if all WAL files are retained by a slot. FWIW\n> I recently proposed a patch to find the oldest WAL file while trying\n> removing old WAL files.\n\nHmm. I agree that the approach I previously sent may be kind of\nconfusing without a clear initialization point, which would actually\nbe (checkPointCopy.redo + checkPointCopy.ThisTimeLineID) from the\ncontrol file with an extra computation depending on any replication\nslot data present on disk? So one could do the maths cleanly after\nStartupReplicationSlots() is called in the startup process. My point\nis: it does not seem really obvious to me that we need to change the\ncontrol file to track that.\n\n> I'm not sure about the consensus here about showing that number in the\n> view. It is similar to \"remain\" in the earlier versions of this patch\n> but a bit simpler. It would be usable in a similar way. I can live\n> with either numbers.\n\nAnyway, here is my take. We are discussing a design issue here, we\nare moving the discussion into having a different design, and\ndiscussing new designs is never a good sign post-beta (some open items\ntend to move towards this direction every year). 
So I'd like to think\nthat the best thing we can do here is just to drop min_safe_lsn from\npg_replication_slots, and just reconsider this part for 14~ with\nsomething we think is better.\n\nBy the way, I have added a separate open item for this thread.\n--\nMichael",
"msg_date": "Tue, 23 Jun 2020 11:16:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "\n\nOn 2020/06/23 10:10, Kyotaro Horiguchi wrote:\n> At Mon, 22 Jun 2020 22:02:51 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>\n>>\n>> On 2020/06/22 21:01, Amit Kapila wrote:\n>>> On Mon, Jun 22, 2020 at 11:19 AM Michael Paquier <michael@paquier.xyz>\n>>> wrote:\n>>>>\n>>>> On Sat, Jun 20, 2020 at 03:53:54PM +0900, Michael Paquier wrote:\n>>>>> On Sat, Jun 20, 2020 at 09:45:52AM +0530, Amit Kapila wrote:\n>>>>>> Isn't this information specific to checkpoints, so maybe better to\n>>>>>> display in view pg_stat_bgwriter?\n>>>>>\n>>>>> Not sure that's a good match. If we decide to expose that, a separate\n>>>>> function returning a LSN based on the segment number from\n>>>>> XLogGetLastRemovedSegno() sounds fine to me, like\n>>>>> pg_wal_last_recycled_lsn(). Perhaps somebody has a better name in\n>>>>> mind?\n>>>>\n>>>> I was thinking on this one for the last couple of days, and came up\n>>>> with the name pg_wal_oldest_lsn(), as per the attached, traking the\n>>>> oldest WAL location still available.\n>>\n>> Thanks for the patch!\n>>\n>> + <literal>NULL</literal> if no WAL segments have been removed since\n>> + startup.\n>>\n>> Isn't this confusing? I think that we should store the last removed\n>> WAL segment to somewhere (e.g., pg_control) and restore it at\n>> the startup, so that we can see the actual value even after the\n>> startup.\n>> Or we should scan pg_wal directory and find the \"minimal\" WAL segment\n>> and return its LSN.\n> \n> Running a separate scan on pg_wal at startup or first time the oldest\n> WAL segno is referenced is something that was rejected before. But\n> with the current behavior we don't find the last removed segment until\n> any slot loses a segment if all WAL files are retained by a slot.\n\nBecause scanning pg_wal can be heavy operation especially when\nmax_wal_size is high and there are lots of WAL files? 
If so, it might\nbe better to save the value in pg_control as I told upthread.\n\nHowever I'm not sure the use case of this function yet...\n\n> FWIW\n> I recently proposed a patch to find the oldest WAL file while trying\n> removing old WAL files.\n> \n>>> I feel such a function is good to have but I am not sure if there is a\n>>> need to tie it with the removal of min_safe_lsn column.\n>>\n>> We should expose the LSN calculated from\n>> \"the current WAL LSN - max(wal_keep_segments * 16MB,\n>> max_slot_wal_keep_size)\"?\n>> This indicates the minimum LSN of WAL files that are guaraneed to be\n>> currently retained by wal_keep_segments and max_slot_wal_keep_size.\n>> That is, if checkpoint occurs when restart_lsn of replication slot is\n>> smaller than that minimum LSN, some required WAL files may be removed.\n>> So DBAs can periodically monitor and compare restart_lsn and that\n>> minimum\n>> LSN. If they see frequently that difference of those LSN is very\n>> small,\n>> they can decide to increase wal_keep_segments or\n>> max_slot_wal_keep_size,\n>> to prevent required WAL files from being removed. Thought?\n> \n> I'm not sure about the consensus here about showing that number in the\n> view. It is similar to \"remain\" in the earlier versions of this patch\n> but a bit simpler. It would be usable in a similar way. I can live\n> with either numbers.\n\nIt's useless to display this value in each replication slot in the view.\nI'm thinking to expose it as a function.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 23 Jun 2020 11:17:15 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On Mon, Jun 22, 2020 at 6:32 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2020/06/22 21:01, Amit Kapila wrote:\n> > On Mon, Jun 22, 2020 at 11:19 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >>\n> >> On Sat, Jun 20, 2020 at 03:53:54PM +0900, Michael Paquier wrote:\n> >>> On Sat, Jun 20, 2020 at 09:45:52AM +0530, Amit Kapila wrote:\n> >>>> Isn't this information specific to checkpoints, so maybe better to\n> >>>> display in view pg_stat_bgwriter?\n> >>>\n> >>> Not sure that's a good match. If we decide to expose that, a separate\n> >>> function returning a LSN based on the segment number from\n> >>> XLogGetLastRemovedSegno() sounds fine to me, like\n> >>> pg_wal_last_recycled_lsn(). Perhaps somebody has a better name in\n> >>> mind?\n> >>\n> >> I was thinking on this one for the last couple of days, and came up\n> >> with the name pg_wal_oldest_lsn(), as per the attached, traking the\n> >> oldest WAL location still available.\n>\n> Thanks for the patch!\n>\n> + <literal>NULL</literal> if no WAL segments have been removed since\n> + startup.\n>\n> Isn't this confusing? 
I think that we should store the last removed\n> WAL segment to somewhere (e.g., pg_control) and restore it at\n> the startup, so that we can see the actual value even after the startup.\n> Or we should scan pg_wal directory and find the \"minimal\" WAL segment\n> and return its LSN.\n>\n>\n> > I feel such a function is good to have but I am not sure if there is a\n> > need to tie it with the removal of min_safe_lsn column.\n>\n> We should expose the LSN calculated from\n> \"the current WAL LSN - max(wal_keep_segments * 16MB, max_slot_wal_keep_size)\"?\n> This indicates the minimum LSN of WAL files that are guaraneed to be\n> currently retained by wal_keep_segments and max_slot_wal_keep_size.\n> That is, if checkpoint occurs when restart_lsn of replication slot is\n> smaller than that minimum LSN, some required WAL files may be removed.\n>\n> So DBAs can periodically monitor and compare restart_lsn and that minimum\n> LSN. If they see frequently that difference of those LSN is very small,\n> they can decide to increase wal_keep_segments or max_slot_wal_keep_size,\n> to prevent required WAL files from being removed. Thought?\n>\n\n+1. This sounds like a good and useful stat for users.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Jun 2020 11:50:34 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On Tue, Jun 23, 2020 at 7:47 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2020/06/23 10:10, Kyotaro Horiguchi wrote:\n> > At Mon, 22 Jun 2020 22:02:51 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n> >>\n> >>> I feel such a function is good to have but I am not sure if there is a\n> >>> need to tie it with the removal of min_safe_lsn column.\n> >>\n> >> We should expose the LSN calculated from\n> >> \"the current WAL LSN - max(wal_keep_segments * 16MB,\n> >> max_slot_wal_keep_size)\"?\n> >> This indicates the minimum LSN of WAL files that are guaraneed to be\n> >> currently retained by wal_keep_segments and max_slot_wal_keep_size.\n> >> That is, if checkpoint occurs when restart_lsn of replication slot is\n> >> smaller than that minimum LSN, some required WAL files may be removed.\n> >> So DBAs can periodically monitor and compare restart_lsn and that\n> >> minimum\n> >> LSN. If they see frequently that difference of those LSN is very\n> >> small,\n> >> they can decide to increase wal_keep_segments or\n> >> max_slot_wal_keep_size,\n> >> to prevent required WAL files from being removed. Thought?\n> >\n> > I'm not sure about the consensus here about showing that number in the\n> > view. It is similar to \"remain\" in the earlier versions of this patch\n> > but a bit simpler. It would be usable in a similar way. 
I can live\n> > with either numbers.\n>\n> It's useless to display this value in each replication slot in the view.\n> I'm thinking to expose it as a function.\n>\n\nHaving a separate function for this seems like a good idea but can we\nconsider displaying it in a view like pg_stat_replication_slots as we\nare discussing a nearby thread to have such a view for other things.\nI think ultimately this information is required to check whether some\nslot can be invalidated or not, so having it displayed along with\nother slot information might not be a bad idea.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1Jyh4qgdnxzV4fYuk9GiXLb%3DUz-6o19E2RfiN8MPmUu3A%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Jun 2020 11:57:30 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "At Tue, 23 Jun 2020 11:50:34 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Mon, Jun 22, 2020 at 6:32 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >\n> > On 2020/06/22 21:01, Amit Kapila wrote:\n> > > On Mon, Jun 22, 2020 at 11:19 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > >>\n> > >> On Sat, Jun 20, 2020 at 03:53:54PM +0900, Michael Paquier wrote:\n> > >>> On Sat, Jun 20, 2020 at 09:45:52AM +0530, Amit Kapila wrote:\n> > >>>> Isn't this information specific to checkpoints, so maybe better to\n> > >>>> display in view pg_stat_bgwriter?\n> > >>>\n> > >>> Not sure that's a good match. If we decide to expose that, a separate\n> > >>> function returning a LSN based on the segment number from\n> > >>> XLogGetLastRemovedSegno() sounds fine to me, like\n> > >>> pg_wal_last_recycled_lsn(). Perhaps somebody has a better name in\n> > >>> mind?\n> > >>\n> > >> I was thinking on this one for the last couple of days, and came up\n> > >> with the name pg_wal_oldest_lsn(), as per the attached, traking the\n> > >> oldest WAL location still available.\n> >\n> > Thanks for the patch!\n> >\n> > + <literal>NULL</literal> if no WAL segments have been removed since\n> > + startup.\n> >\n> > Isn't this confusing? 
I think that we should store the last removed\n> > WAL segment to somewhere (e.g., pg_control) and restore it at\n> > the startup, so that we can see the actual value even after the startup.\n> > Or we should scan pg_wal directory and find the \"minimal\" WAL segment\n> > and return its LSN.\n> >\n> >\n> > > I feel such a function is good to have but I am not sure if there is a\n> > > need to tie it with the removal of min_safe_lsn column.\n> >\n> > We should expose the LSN calculated from\n> > \"the current WAL LSN - max(wal_keep_segments * 16MB, max_slot_wal_keep_size)\"?\n> > This indicates the minimum LSN of WAL files that are guaraneed to be\n> > currently retained by wal_keep_segments and max_slot_wal_keep_size.\n> > That is, if checkpoint occurs when restart_lsn of replication slot is\n> > smaller than that minimum LSN, some required WAL files may be removed.\n> >\n> > So DBAs can periodically monitor and compare restart_lsn and that minimum\n> > LSN. If they see frequently that difference of those LSN is very small,\n> > they can decide to increase wal_keep_segments or max_slot_wal_keep_size,\n> > to prevent required WAL files from being removed. Thought?\n> >\n> \n> +1. This sounds like a good and useful stat for users.\n\n+1 for showing a number that is not involving lastRemovedSegNo. It is\nlike returning to the initial version of this patch. It showed a\nnumber like ((the suggested above) minus restart_lsn). The number is\ndifferent for each slot so they fit in the view.\n\nThe number is usable for the same purpose so I'm ok with it.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 23 Jun 2020 17:08:14 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On 2020-Jun-23, Kyotaro Horiguchi wrote:\n\n> At Tue, 23 Jun 2020 11:50:34 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> > On Mon, Jun 22, 2020 at 6:32 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\n> > > We should expose the LSN calculated from\n> > > \"the current WAL LSN - max(wal_keep_segments * 16MB, max_slot_wal_keep_size)\"?\n> > > This indicates the minimum LSN of WAL files that are guaraneed to be\n> > > currently retained by wal_keep_segments and max_slot_wal_keep_size.\n> > > That is, if checkpoint occurs when restart_lsn of replication slot is\n> > > smaller than that minimum LSN, some required WAL files may be removed.\n> > >\n> > > So DBAs can periodically monitor and compare restart_lsn and that minimum\n> > > LSN. If they see frequently that difference of those LSN is very small,\n> > > they can decide to increase wal_keep_segments or max_slot_wal_keep_size,\n> > > to prevent required WAL files from being removed. Thought?\n> > \n> > +1. This sounds like a good and useful stat for users.\n> \n> +1 for showing a number that is not involving lastRemovedSegNo. It is\n> like returning to the initial version of this patch. It showed a\n> number like ((the suggested above) minus restart_lsn). The number is\n> different for each slot so they fit in the view.\n> \n> The number is usable for the same purpose so I'm ok with it.\n\nI think we should publish the value from wal_keep_segments separately\nfrom max_slot_wal_keep_size. ISTM that the user might decide to change\nor remove wal_keep_segments and be suddenly at risk of losing slots\nbecause of overlooking that it was wal_keep_segments, not\nmax_slot_wal_keep_size, that was protecting them.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 23 Jun 2020 19:39:38 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "\n\nOn 2020/06/23 15:27, Amit Kapila wrote:\n> On Tue, Jun 23, 2020 at 7:47 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>> On 2020/06/23 10:10, Kyotaro Horiguchi wrote:\n>>> At Mon, 22 Jun 2020 22:02:51 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>>>\n>>>>> I feel such a function is good to have but I am not sure if there is a\n>>>>> need to tie it with the removal of min_safe_lsn column.\n>>>>\n>>>> We should expose the LSN calculated from\n>>>> \"the current WAL LSN - max(wal_keep_segments * 16MB,\n>>>> max_slot_wal_keep_size)\"?\n>>>> This indicates the minimum LSN of WAL files that are guaraneed to be\n>>>> currently retained by wal_keep_segments and max_slot_wal_keep_size.\n>>>> That is, if checkpoint occurs when restart_lsn of replication slot is\n>>>> smaller than that minimum LSN, some required WAL files may be removed.\n>>>> So DBAs can periodically monitor and compare restart_lsn and that\n>>>> minimum\n>>>> LSN. If they see frequently that difference of those LSN is very\n>>>> small,\n>>>> they can decide to increase wal_keep_segments or\n>>>> max_slot_wal_keep_size,\n>>>> to prevent required WAL files from being removed. Thought?\n>>>\n>>> I'm not sure about the consensus here about showing that number in the\n>>> view. It is similar to \"remain\" in the earlier versions of this patch\n>>> but a bit simpler. It would be usable in a similar way. 
I can live\n>>> with either numbers.\n>>\n>> It's useless to display this value in each replication slot in the view.\n>> I'm thinking to expose it as a function.\n>>\n> \n> Having a separate function for this seems like a good idea but can we\n> consider displaying it in a view like pg_stat_replication_slots as we\n> are discussing a nearby thread to have such a view for other things.\n> I think ultimately this information is required to check whether some\n> slot can be invalidated or not, so having it displayed along with\n> other slot information might not be a bad idea.\n\n\"the current WAL LSN - max(wal_keep_segments * 16MB, max_slot_wal_keep_size)\"\nis the same value between all the replication slots. But you think it's better\nto display that same value for every slots in the view?\n\nOr you're thinking to display the difference of that LSN value and\nrestart_lsn as Horiguchi-san suggested? That diff varies each replication slot,\nso it seems ok to display it for every rows.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 24 Jun 2020 18:07:45 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "\n\nOn 2020/06/24 8:39, Alvaro Herrera wrote:\n> On 2020-Jun-23, Kyotaro Horiguchi wrote:\n> \n>> At Tue, 23 Jun 2020 11:50:34 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n>>> On Mon, Jun 22, 2020 at 6:32 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> \n>>>> We should expose the LSN calculated from\n>>>> \"the current WAL LSN - max(wal_keep_segments * 16MB, max_slot_wal_keep_size)\"?\n>>>> This indicates the minimum LSN of WAL files that are guaraneed to be\n>>>> currently retained by wal_keep_segments and max_slot_wal_keep_size.\n>>>> That is, if checkpoint occurs when restart_lsn of replication slot is\n>>>> smaller than that minimum LSN, some required WAL files may be removed.\n>>>>\n>>>> So DBAs can periodically monitor and compare restart_lsn and that minimum\n>>>> LSN. If they see frequently that difference of those LSN is very small,\n>>>> they can decide to increase wal_keep_segments or max_slot_wal_keep_size,\n>>>> to prevent required WAL files from being removed. Thought?\n>>>\n>>> +1. This sounds like a good and useful stat for users.\n>>\n>> +1 for showing a number that is not involving lastRemovedSegNo. It is\n>> like returning to the initial version of this patch. It showed a\n>> number like ((the suggested above) minus restart_lsn). The number is\n>> different for each slot so they fit in the view.\n>>\n>> The number is usable for the same purpose so I'm ok with it.\n> \n> I think we should publish the value from wal_keep_segments separately\n> from max_slot_wal_keep_size. ISTM that the user might decide to change\n> or remove wal_keep_segments and be suddenly at risk of losing slots\n> because of overlooking that it was wal_keep_segments, not\n> max_slot_wal_keep_size, that was protecting them.\n\nYou mean to have two functions that returns\n\n1. \"current WAL LSN - wal_keep_segments * 16MB\"\n2. 
\"current WAL LSN - max_slot_wal_keep_size\"\n\nRight?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 24 Jun 2020 18:09:06 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On Wed, Jun 24, 2020 at 2:37 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2020/06/23 15:27, Amit Kapila wrote:\n> >\n> > Having a separate function for this seems like a good idea but can we\n> > consider displaying it in a view like pg_stat_replication_slots as we\n> > are discussing a nearby thread to have such a view for other things.\n> > I think ultimately this information is required to check whether some\n> > slot can be invalidated or not, so having it displayed along with\n> > other slot information might not be a bad idea.\n>\n> \"the current WAL LSN - max(wal_keep_segments * 16MB, max_slot_wal_keep_size)\"\n> is the same value between all the replication slots. But you think it's better\n> to display that same value for every slots in the view?\n>\n> Or you're thinking to display the difference of that LSN value and\n> restart_lsn as Horiguchi-san suggested?\n>\n\nI see value in Horiguchi-San's proposal. IIUC, it will tell help\nDBAs/Users to know if any particular slot will get invalidated soon.\n\n> That diff varies each replication slot,\n> so it seems ok to display it for every rows.\n>\n\nYes.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 24 Jun 2020 18:19:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On 2020-Jun-24, Fujii Masao wrote:\n\n> On 2020/06/24 8:39, Alvaro Herrera wrote:\n\n> > I think we should publish the value from wal_keep_segments separately\n> > from max_slot_wal_keep_size. ISTM that the user might decide to change\n> > or remove wal_keep_segments and be suddenly at risk of losing slots\n> > because of overlooking that it was wal_keep_segments, not\n> > max_slot_wal_keep_size, that was protecting them.\n> \n> You mean to have two functions that returns\n> \n> 1. \"current WAL LSN - wal_keep_segments * 16MB\"\n> 2. \"current WAL LSN - max_slot_wal_keep_size\"\n\nHmm, but all the values there are easily findable. What would be the\npoint in repeating it?\n\nMaybe we should disregard this line of thinking and go back to\nHoriguchi-san's original proposal, to wit use the \"distance to\nbreakage\", as also supported now by Amit Kapila[1] (unless I\nmisunderstand him).\n\n[1] https://postgr.es/m/CAA4eK1L2oJ7T1cESdc5w4J9L3Q_hhvWqTigdAXKfnsJy4=v13w@mail.gmail.com\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 24 Jun 2020 11:15:14 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On Wed, Jun 24, 2020 at 8:45 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2020-Jun-24, Fujii Masao wrote:\n>\n> > On 2020/06/24 8:39, Alvaro Herrera wrote:\n>\n> > > I think we should publish the value from wal_keep_segments separately\n> > > from max_slot_wal_keep_size. ISTM that the user might decide to change\n> > > or remove wal_keep_segments and be suddenly at risk of losing slots\n> > > because of overlooking that it was wal_keep_segments, not\n> > > max_slot_wal_keep_size, that was protecting them.\n> >\n> > You mean to have two functions that returns\n> >\n> > 1. \"current WAL LSN - wal_keep_segments * 16MB\"\n> > 2. \"current WAL LSN - max_slot_wal_keep_size\"\n>\n> Hmm, but all the values there are easily findable. What would be the\n> point in repeating it?\n>\n> Maybe we should disregard this line of thinking and go back to\n> Horiguchi-san's original proposal, to wit use the \"distance to\n> breakage\", as also supported now by Amit Kapila[1] (unless I\n> misunderstand him).\n>\n\n+1. I also think let's drop the idea of exposing a function for this\nvalue and revert the min_safe_lsn part of the work as proposed by\nMichael above [1] excluding the function pg_wal_oldest_lsn() in that\npatch. Then, we can expose this as a new stat for PG14. I feel it\nwould be better to display this stat in a new view (something like\npg_stat_replication_slots) as discussed in another thread [2]. Does\nthat make sense?\n\n[1] - https://www.postgresql.org/message-id/20200622054950.GC50978%40paquier.xyz\n[2] - https://www.postgresql.org/message-id/CA%2Bfd4k5_pPAYRTDrO2PbtTOe0eHQpBvuqmCr8ic39uTNmR49Eg%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 25 Jun 2020 15:50:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On 2020-Jun-25, Amit Kapila wrote:\n\n> +1. I also think let's drop the idea of exposing a function for this\n> value and revert the min_safe_lsn part of the work as proposed by\n> Michael above [1] excluding the function pg_wal_oldest_lsn() in that\n> patch. Then, we can expose this as a new stat for PG14. I feel it\n> would be better to display this stat in a new view (something like\n> pg_stat_replication_slots) as discussed in another thread [2]. Does\n> that make sense?\n\nI don't understand the proposal. Michael posted a patch that adds\npg_wal_oldest_lsn(), and you say we should apply the patch except the\npart that adds that function -- so what part would be applying?\n\nIf the proposal is to apply just the hunk in pg_get_replication_slots\nthat removes min_safe_lsn, and do nothing else in pg13, then I don't like\nit. The feature exposes a way to monitor slots w.r.t. the maximum slot\nsize; I'm okay if you prefer to express that in a different way, but I\ndon't like the idea of shipping pg13 without any way to monitor it.\n\nAs reported by Masao-san, the current min_safe_lsn has a definitional\nproblem when used with pg_walfile_name(), but we've established that\nthat's because pg_walfile_name() has a special-case definition, not\nbecause min_safe_lsn itself is bogus. If we're looking for a minimal\nchange that can fix this problem, let's increment one byte, which should\nfix that issue, no?\n\nI also see that some people complain that all slots return the same\nvalue and therefore this column is redundant. To that argument I say\nthat it's not unreasonable that we'll add a slot-specific size limit;\nand if we do, we'll be happy we had slot-specific min safe LSN; see e.g.\nhttps://postgr.es/m/20170301160610.wc7ez3vihmialntd@alap3.anarazel.de\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 25 Jun 2020 11:24:27 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On Thu, Jun 25, 2020 at 11:24:27AM -0400, Alvaro Herrera wrote:\n> I don't understand the proposal. Michael posted a patch that adds\n> pg_wal_oldest_lsn(), and you say we should apply the patch except the\n> part that adds that function -- so what part would be applying?\n\nI have sent last week a patch about only the removal of min_safe_lsn:\nhttps://www.postgresql.org/message-id/20200619121552.GH453547@paquier.xyz\nSo this applies to this part.\n\n> If the proposal is to apply just the hunk in pg_get_replication_slots\n> that removes min_safe_lsn, and do nothing else in pg13, then I don't like\n> it. The feature exposes a way to monitor slots w.r.t. the maximum slot\n> size; I'm okay if you prefer to express that in a different way, but I\n> don't like the idea of shipping pg13 without any way to monitor it.\n\nFrom what I can see, it seems to me that we have a lot of views of how\nto tackle the matter. That gives an idea that we are not really happy\nwith the current state of things, and usually a sign that we may want\nto redesign it, going back to this issue for v14.\n\nMy 2c.\n--\nMichael",
"msg_date": "Fri, 26 Jun 2020 07:53:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On 2020-Jun-26, Michael Paquier wrote:\n\n> On Thu, Jun 25, 2020 at 11:24:27AM -0400, Alvaro Herrera wrote:\n> > I don't understand the proposal. Michael posted a patch that adds\n> > pg_wal_oldest_lsn(), and you say we should apply the patch except the\n> > part that adds that function -- so what part would be applying?\n> \n> I have sent last week a patch about only the removal of min_safe_lsn:\n> https://www.postgresql.org/message-id/20200619121552.GH453547@paquier.xyz\n> So this applies to this part.\n\nWell, I oppose that because it leaves us with no way to monitor slot\nlimits. In his opening email, Masao-san proposed to simply change the\nvalue by adding 1. How you go from adding 1 to a column to removing\nthe column completely with no recourse, is beyond me.\n\nLet me summarize the situation and possible ways forward as I see them.\nIf I'm mistaken, please correct me.\n\nProblems:\ni) pg_replication_slot.min_safe_lsn has a weird definition in that all\n replication slots show the same value\nii) min_safe_lsn cannot be used with pg_walfile_name, because it returns\n the name of the previous segment.\n\nProposed solutions:\n\na) Do nothing -- keep the min_safe_lsn column as is. Warn users that\n pg_walfile_name should not be used with this column.\nb) Redefine min_safe_lsn to be lsn+1, so that pg_walfile_name can be used\n and return a useful value.\nc) Remove min_safe_lsn; add functions that expose the same value\nd) Remove min_safe_lsn; add a new view that exposes the same value and\n possibly others\ne) Replace min_safe_lsn with a \"distance\" column, which reports\n restart_lsn - oldest valid LSN\n (Note that you no longer have an LSN in this scenario, so you can't\n call pg_walfile_name.)\n\nThe original patch implemented (e); it was changed to its current\ndefinition because of this[1] comment. 
My proposal is to put it back.\n\n[1] https://postgr.es/m/20171106132050.6apzynxrqrzghb4r@alap3.anarazel.de\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 25 Jun 2020 19:24:17 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On Fri, Jun 26, 2020 at 4:54 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2020-Jun-26, Michael Paquier wrote:\n>\n> > On Thu, Jun 25, 2020 at 11:24:27AM -0400, Alvaro Herrera wrote:\n> > > I don't understand the proposal. Michael posted a patch that adds\n> > > pg_wal_oldest_lsn(), and you say we should apply the patch except the\n> > > part that adds that function -- so what part would be applying?\n> >\n> > I have sent last week a patch about only the removal of min_safe_lsn:\n> > https://www.postgresql.org/message-id/20200619121552.GH453547@paquier.xyz\n> > So this applies to this part.\n>\n> Well, I oppose that because it leaves us with no way to monitor slot\n> limits. In his opening email, Masao-san proposed to simply change the\n> value by adding 1. How you go from adding 1 to a column to removing\n> the column completely with no recourse, is beyond me.\n>\n> Let me summarize the situation and possible ways forward as I see them.\n> If I'm mistaken, please correct me.\n>\n> Problems:\n> i) pg_replication_slot.min_safe_lsn has a weird definition in that all\n> replication slots show the same value\n>\n\nIt is also not clear how the user can make use of that value?\n\n> ii) min_safe_lsn cannot be used with pg_walfile_name, because it returns\n> the name of the previous segment.\n>\n> Proposed solutions:\n>\n> a) Do nothing -- keep the min_safe_lsn column as is. 
Warn users that\n> pg_walfile_name should not be used with this column.\n> b) Redefine min_safe_lsn to be lsn+1, so that pg_walfile_name can be used\n> and return a useful value.\n> c) Remove min_safe_lsn; add functions that expose the same value\n> d) Remove min_safe_lsn; add a new view that exposes the same value and\n> possibly others\n>\n> e) Replace min_safe_lsn with a \"distance\" column, which reports\n> restart_lsn - oldest valid LSN\n> (Note that you no longer have an LSN in this scenario, so you can't\n> call pg_walfile_name.)\n>\n\nCan we consider an option to \"Remove min_safe_lsn; document how a user\ncan monitor the distance\"? We have a function to get current WAL\ninsert location and other things required are available either via\nview or as guc variable values. The reason I am thinking of this\noption is that it might be better to get some more feedback on what is\nthe most appropriate value to display. However, I am okay if we can\nreach a consensus on one of the above options.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 26 Jun 2020 10:15:08 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "\n\nOn 2020/06/26 13:45, Amit Kapila wrote:\n> On Fri, Jun 26, 2020 at 4:54 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>>\n>> On 2020-Jun-26, Michael Paquier wrote:\n>>\n>>> On Thu, Jun 25, 2020 at 11:24:27AM -0400, Alvaro Herrera wrote:\n>>>> I don't understand the proposal. Michael posted a patch that adds\n>>>> pg_wal_oldest_lsn(), and you say we should apply the patch except the\n>>>> part that adds that function -- so what part would be applying?\n>>>\n>>> I have sent last week a patch about only the removal of min_safe_lsn:\n>>> https://www.postgresql.org/message-id/20200619121552.GH453547@paquier.xyz\n>>> So this applies to this part.\n>>\n>> Well, I oppose that because it leaves us with no way to monitor slot\n>> limits. In his opening email, Masao-san proposed to simply change the\n>> value by adding 1. How you go from adding 1 to a column to removing\n>> the column completely with no recourse, is beyond me.\n>>\n>> Let me summarize the situation and possible ways forward as I see them.\n>> If I'm mistaken, please correct me.\n>>\n>> Problems:\n>> i) pg_replication_slot.min_safe_lsn has a weird definition in that all\n>> replication slots show the same value\n>>\n> \n> It is also not clear how the user can make use of that value?\n> \n>> ii) min_safe_lsn cannot be used with pg_walfile_name, because it returns\n>> the name of the previous segment.\n>>\n>> Proposed solutions:\n>>\n>> a) Do nothing -- keep the min_safe_lsn column as is. 
Warn users that\n>> pg_walfile_name should not be used with this column.\n>> b) Redefine min_safe_lsn to be lsn+1, so that pg_walfile_name can be used\n>> and return a useful value.\n>> c) Remove min_safe_lsn; add functions that expose the same value\n>> d) Remove min_safe_lsn; add a new view that exposes the same value and\n>> possibly others\n>>\n>> e) Replace min_safe_lsn with a \"distance\" column, which reports\n>> restart_lsn - oldest valid LSN\n>> (Note that you no longer have an LSN in this scenario, so you can't\n>> call pg_walfile_name.)\n\nI like (e).\n\n> \n> Can we consider an option to \"Remove min_safe_lsn; document how a user\n> can monitor the distance\"? We have a function to get current WAL\n> insert location and other things required are available either via\n> view or as guc variable values. The reason I am thinking of this\n> option is that it might be better to get some more feedback on what is\n> the most appropriate value to display. However, I am okay if we can\n> reach a consensus on one of the above options.\n\nYes, that's an idea. But it might not be easy to calculate that distance\nmanually by subtracting max_slot_wal_keep_size from the current LSN.\nBecause we've not supported -(pg_lsn, numeric) operator yet. I'm\nproposing that operator, but it's for v14.\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 30 Jun 2020 17:07:03 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "\n\nOn 2020/06/30 17:07, Fujii Masao wrote:\n> \n> \n> On 2020/06/26 13:45, Amit Kapila wrote:\n>> On Fri, Jun 26, 2020 at 4:54 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>>>\n>>> On 2020-Jun-26, Michael Paquier wrote:\n>>>\n>>>> On Thu, Jun 25, 2020 at 11:24:27AM -0400, Alvaro Herrera wrote:\n>>>>> I don't understand the proposal. Michael posted a patch that adds\n>>>>> pg_wal_oldest_lsn(), and you say we should apply the patch except the\n>>>>> part that adds that function -- so what part would be applying?\n>>>>\n>>>> I have sent last week a patch about only the removal of min_safe_lsn:\n>>>> https://www.postgresql.org/message-id/20200619121552.GH453547@paquier.xyz\n>>>> So this applies to this part.\n>>>\n>>> Well, I oppose that because it leaves us with no way to monitor slot\n>>> limits. In his opening email, Masao-san proposed to simply change the\n>>> value by adding 1. How you go from adding 1 to a column to removing\n>>> the column completely with no recourse, is beyond me.\n>>>\n>>> Let me summarize the situation and possible ways forward as I see them.\n>>> If I'm mistaken, please correct me.\n>>>\n>>> Problems:\n>>> i) pg_replication_slot.min_safe_lsn has a weird definition in that all\n>>> replication slots show the same value\n>>>\n>>\n>> It is also not clear how the user can make use of that value?\n>>\n>>> ii) min_safe_lsn cannot be used with pg_walfile_name, because it returns\n>>> the name of the previous segment.\n>>>\n>>> Proposed solutions:\n>>>\n>>> a) Do nothing -- keep the min_safe_lsn column as is. 
Warn users that\n>>> pg_walfile_name should not be used with this column.\n>>> b) Redefine min_safe_lsn to be lsn+1, so that pg_walfile_name can be used\n>>> and return a useful value.\n>>> c) Remove min_safe_lsn; add functions that expose the same value\n>>> d) Remove min_safe_lsn; add a new view that exposes the same value and\n>>> possibly others\n>>>\n>>> e) Replace min_safe_lsn with a \"distance\" column, which reports\n>>> restart_lsn - oldest valid LSN\n>>> (Note that you no longer have an LSN in this scenario, so you can't\n>>> call pg_walfile_name.)\n> \n> I like (e).\n> \n>>\n>> Can we consider an option to \"Remove min_safe_lsn; document how a user\n>> can monitor the distance\"? We have a function to get current WAL\n>> insert location and other things required are available either via\n>> view or as guc variable values. The reason I am thinking of this\n>> option is that it might be better to get some more feedback on what is\n>> the most appropriate value to display. However, I am okay if we can\n>> reach a consensus on one of the above options.\n> \n> Yes, that's an idea. But it might not be easy to calculate that distance\n> manually by subtracting max_slot_wal_keep_size from the current LSN.\n> Because we've not supported -(pg_lsn, numeric) operator yet. I'm\n> proposing that operator, but it's for v14.\n\nSorry this is not true. That distance can be calculated without those operators.\nFor example,\n\nSELECT restart_lsn - pg_current_wal_lsn() + (SELECT setting::numeric * 1024 * 1024 FROM pg_settings WHERE name = 'max_slot_wal_keep_size') distance FROM pg_replication_slots;\n\nIf the calculated distance is small or negative value, which means that\nwe may lose some required WAL files. 
So in this case it's worth considering\nto increase max_slot_wal_keep_size.\n\nI still think it's better and more helpful to display something like\nthat distance in pg_replication_slots rather than making each user\ncalculate it...\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 30 Jun 2020 23:23:30 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On 2020-Jun-30, Fujii Masao wrote:\n\n> Sorry this is not true. That distance can be calculated without those operators.\n> For example,\n> \n> SELECT restart_lsn - pg_current_wal_lsn() + (SELECT setting::numeric * 1024 * 1024 FROM pg_settings WHERE name = 'max_slot_wal_keep_size') distance FROM pg_replication_slots;\n> \n> If the calculated distance is small or negative value, which means that\n> we may lose some required WAL files. So in this case it's worth considering\n> to increase max_slot_wal_keep_size.\n\n... OK, but you're forgetting wal_keep_segments.\n\n> I still think it's better and more helpful to display something like\n> that distance in pg_replication_slots rather than making each user\n> calculate it...\n\nAgreed.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 30 Jun 2020 14:09:22 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "At Tue, 30 Jun 2020 23:23:30 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> >> Can we consider an option to \"Remove min_safe_lsn; document how a user\n> >> can monitor the distance\"? We have a function to get current WAL\n> >> insert location and other things required are available either via\n> >> view or as guc variable values. The reason I am thinking of this\n> >> option is that it might be better to get some more feedback on what is\n> >> the most appropriate value to display. However, I am okay if we can\n> >> reach a consensus on one of the above options.\n> > Yes, that's an idea. But it might not be easy to calculate that\n> > distance\n> > manually by subtracting max_slot_wal_keep_size from the current LSN.\n> > Because we've not supported -(pg_lsn, numeric) operator yet. I'm\n> > proposing that operator, but it's for v14.\n> \n> Sorry this is not true. That distance can be calculated without those\n> operators.\n> For example,\n> \n> SELECT restart_lsn - pg_current_wal_lsn() + (SELECT setting::numeric *\n> 1024 * 1024 FROM pg_settings WHERE name = 'max_slot_wal_keep_size')\n> distance FROM pg_replication_slots;\n\nIt's an approximation with accuracy of segment size. The calculation\nwould not be that simple because of the unit of the calculation. The\nformula for the exact calculation (ignoring wal_keep_segments) is:\n\ndistance = (seg_floor(restart_lsn) +\n seg_floor(max_slot_wal_keep_size) + 1) * wal_segment_size -\n\t\t\tcurrent_lsn\n\nwhere seg_floor is floor() by wal_segment_size.\n\nregards.\n\n> If the calculated distance is small or negative value, which means\n> that\n> we may lose some required WAL files. So in this case it's worth\n> considering\n> to increase max_slot_wal_keep_size.\n> \n> I still think it's better and more helpful to display something like\n> that distance in pg_replication_slots rather than making each user\n> calculate it...\n\nAgreed. The attached replaces min_safe_lsn with \"distance\".\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 01 Jul 2020 10:32:59 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On Tue, Jun 30, 2020 at 7:53 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2020/06/30 17:07, Fujii Masao wrote:\n> >\n> >\n> > On 2020/06/26 13:45, Amit Kapila wrote:\n> >>\n> >> Can we consider an option to \"Remove min_safe_lsn; document how a user\n> >> can monitor the distance\"? We have a function to get current WAL\n> >> insert location and other things required are available either via\n> >> view or as guc variable values. The reason I am thinking of this\n> >> option is that it might be better to get some more feedback on what is\n> >> the most appropriate value to display. However, I am okay if we can\n> >> reach a consensus on one of the above options.\n> >\n> > Yes, that's an idea. But it might not be easy to calculate that distance\n> > manually by subtracting max_slot_wal_keep_size from the current LSN.\n> > Because we've not supported -(pg_lsn, numeric) operator yet. I'm\n> > proposing that operator, but it's for v14.\n>\n> Sorry this is not true. That distance can be calculated without those operators.\n> For example,\n>\n> SELECT restart_lsn - pg_current_wal_lsn() + (SELECT setting::numeric * 1024 * 1024 FROM pg_settings WHERE name = 'max_slot_wal_keep_size') distance FROM pg_replication_slots;\n>\n> If the calculated distance is small or negative value, which means that\n> we may lose some required WAL files. So in this case it's worth considering\n> to increase max_slot_wal_keep_size.\n>\n> I still think it's better and more helpful to display something like\n> that distance in pg_replication_slots rather than making each user\n> calculate it...\n>\n\nOkay, but do we think it is better to display this in\npg_replication_slots or some new view like pg_stat_*_slots as being\ndiscussed in [1]? 
It should not happen that we later decide to move\nthis or similar stats to that view.\n\n[1] - https://www.postgresql.org/message-id/CA%2Bfd4k5_pPAYRTDrO2PbtTOe0eHQpBvuqmCr8ic39uTNmR49Eg%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 1 Jul 2020 14:23:13 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On 2020-Jul-01, Amit Kapila wrote:\n\n> Okay, but do we think it is better to display this in\n> pg_replication_slots or some new view like pg_stat_*_slots as being\n> discussed in [1]? It should not happen that we later decide to move\n> this or similar stats to that view.\n\nIt seems that the main motivation for having some counters in another\nview is the ability to reset them; and resetting this distance value\nmakes no sense, so I think it's better to have it in\npg_replication_slots.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 1 Jul 2020 11:14:21 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On Wed, Jul 01, 2020 at 11:14:21AM -0400, Alvaro Herrera wrote:\n> On 2020-Jul-01, Amit Kapila wrote:\n>> Okay, but do we think it is better to display this in\n>> pg_replication_slots or some new view like pg_stat_*_slots as being\n>> discussed in [1]? It should not happen that we later decide to move\n>> this or similar stats to that view.\n> \n> It seems that the main motivation for having some counters in another\n> view is the ability to reset them; and resetting this distance value\n> makes no sense, so I think it's better to have it in\n> pg_replication_slots.\n\npg_replication_slots would make more sense to me than a stat view for a\ndistance column. Now, I have to admit that I am worried when seeing\ndesign discussions on this thread for 13 after beta2 has been shipped,\nso my vote would still be to remove for now the column in 13, document\nan equivalent query to do this work (I actually just do that in a\nbgworker monitoring repslot bloat now in some stuff I maintain\ninternally), and resend a patch in v14 to give the occasion for this\nfeature to go through one extra round of review. My 2c.\n--\nMichael",
"msg_date": "Thu, 2 Jul 2020 10:38:45 +0900",
"msg_from": "michael@paquier.xyz",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On 2020-Jul-02, michael@paquier.xyz wrote:\n\n> pg_replication_slots would make more sense to me than a stat view for a\n> distance column. Now, I have to admit that I am worried when seeing\n> design discussions on this thread for 13 after beta2 has been shipped,\n\nWe already had this discussion and one of the things we said before\nbeta2 was \"we're still in beta2, there's time\". I see no need to panic.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 1 Jul 2020 21:46:01 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On Wed, Jul 1, 2020 at 8:44 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2020-Jul-01, Amit Kapila wrote:\n>\n> > Okay, but do we think it is better to display this in\n> > pg_replication_slots or some new view like pg_stat_*_slots as being\n> > discussed in [1]? It should not happen that we later decide to move\n> > this or similar stats to that view.\n>\n> It seems that the main motivation for having some counters in another\n> view is the ability to reset them; and resetting this distance value\n> makes no sense, so I think it's better to have it in\n> pg_replication_slots.\n>\n\nFair enough. It would be good if we can come up with something better\nthan 'distance' for this column. Some ideas safe_wal_limit,\nsafe_wal_size?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 4 Jul 2020 11:48:47 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On 2020-Jul-04, Amit Kapila wrote:\n\n> Fair enough. It would be good if we can come up with something better\n> than 'distance' for this column. Some ideas safe_wal_limit,\n> safe_wal_size?\n\nHmm, I like safe_wal_size.\n\nI've been looking at this intermittently since late last week and I\nintend to get it done in the next couple of days.\n\nThanks!\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 6 Jul 2020 11:29:42 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On 2020-Jul-06, Alvaro Herrera wrote:\n\n> Hmm, I like safe_wal_size.\n> \n> I've been looking at this intermittently since late last week and I\n> intend to get it done in the next couple of days.\n\nI propose the attached. This is pretty much what was proposed by\nKyotaro, but I made a couple of changes. Most notably, I moved the\ncalculation to the view code itself rather than creating a function in\nxlog.c, mostly because it seemed to me that the new function was\ncreating an abstraction leakage without adding any value; also, if we\nadd per-slot size limits later, it would get worse.\n\nThe other change was to report negative values when the slot becomes\nunreserved, rather than zero. It shows how much beyond safety your\nslots are getting, so it seems useful. Clamping at zero seems to serve\nno purpose.\n\nI also made it report null immediately when slots are in state lost.\nBut beware of slots that appear lost but fall in the unreserved category\nbecause they advanced before checkpointer signalled them. (This case\nrequires a debugger to hit ...)\n\n\nOne thing that got my attention while going over this is that the error\nmessage we throw when making a slot invalid is not very helpful; it\ndoesn't say what the current insertion LSN was at that point. Maybe we\nshould add that? (As a separate patch, of course.)\n\nAny more thoughts? If not, I'll get this pushed tomorrow finally.\n\nThanks\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 6 Jul 2020 20:54:36 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "Thanks!\nAt Mon, 6 Jul 2020 20:54:36 -0400, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in \n> On 2020-Jul-06, Alvaro Herrera wrote:\n> \n> > Hmm, I like safe_wal_size.\n\nI agree to the name, too.\n\n> > I've been looking at this intermittently since late last week and I\n> > intend to get it done in the next couple of days.\n> \n> I propose the attached. This is pretty much what was proposed by\n> Kyotaro, but I made a couple of changes. Most notably, I moved the\n> calculation to the view code itself rather than creating a function in\n> xlog.c, mostly because it seemed to me that the new function was\n> creating an abstraction leakage without adding any value; also, if we\n> add per-slot size limits later, it would get worse.\n\nI'm not sure that detailed WAL segment calculation fits slotfuncs.c\nbut I don't object to the change. However if we do that:\n\n+\t\t\t/* determine how many segments slots can be kept by slots ... */\n+\t\t\tkeepSegs = max_slot_wal_keep_size_mb / (wal_segment_size / (1024 * 1024));\n\nCouldn't we move ConvertToXSegs from xlog.c to xlog_ingernals.h and\nuse it intead of the bare expression?\n\n\n> The other change was to report negative values when the slot becomes\n> unreserved, rather than zero. It shows how much beyond safety your\n> slots are getting, so it seems useful. Clamping at zero seems to serve\n> no purpose.\n\nThe reason for the clamping is the signedness of the values, or\nintegral promotion. However, I believe the calculation cannot go\nbeyond the range of signed long so the signedness conversion in the\npatch looks fine.\n\n> I also made it report null immediately when slots are in state lost.\n> But beware of slots that appear lost but fall in the unreserved category\n> because they advanced before checkpointer signalled them. (This case\n> requires a debugger to hit ...)\n\nOh! 
Okay, that change seems right to me.\n\n> One thing that got my attention while going over this is that the error\n> message we throw when making a slot invalid is not very helpful; it\n> doesn't say what the current insertion LSN was at that point. Maybe we\n> should add that? (As a separate patch, of couse.)\n\nIt sounds helpful to me. (I remember that I sometime want to see\ncheckpoint LSNs in server log..)\n\n> Any more thoughts? If not, I'll get this pushed tomorrow finally.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 07 Jul 2020 10:55:10 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On 2020-Jul-07, Kyotaro Horiguchi wrote:\n\n> At Mon, 6 Jul 2020 20:54:36 -0400, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in \n\n> > I propose the attached. This is pretty much what was proposed by\n> > Kyotaro, but I made a couple of changes. Most notably, I moved the\n> > calculation to the view code itself rather than creating a function in\n> > xlog.c, mostly because it seemed to me that the new function was\n> > creating an abstraction leakage without adding any value; also, if we\n> > add per-slot size limits later, it would get worse.\n> \n> I'm not sure that detailed WAL segment calculation fits slotfuncs.c\n> but I don't object to the change. However if we do that:\n> \n> +\t\t\t/* determine how many segments slots can be kept by slots ... */\n> +\t\t\tkeepSegs = max_slot_wal_keep_size_mb / (wal_segment_size / (1024 * 1024));\n> \n> Couldn't we move ConvertToXSegs from xlog.c to xlog_ingernals.h and\n> use it intead of the bare expression?\n\nI was of two minds about that, and the only reason I didn't do it is\nthat we'll need to give it a better name if we do it ... I'm open to\nsuggestions.\n\n> > The other change was to report negative values when the slot becomes\n> > unreserved, rather than zero. It shows how much beyond safety your\n> > slots are getting, so it seems useful. Clamping at zero seems to serve\n> > no purpose.\n> \n> The reason for the clamping is the signedness of the values, or\n> integral promotion. However, I believe the calculation cannot go\n> beyond the range of signed long so the signedness conversion in the\n> patch looks fine.\n\nYeah, I think the negative values are useful to see. I think if you\never get close to 2^62, you're in much more serious trouble anyway :-)\nBut I don't deny that the math there could be subject of overflow\nissues. If you want to verify, please be my guest ...\n\n> > One thing that got my attention while going over this is that the error\n> > message we throw when making a slot invalid is not very helpful; it\n> > doesn't say what the current insertion LSN was at that point. Maybe we\n> > should add that? (As a separate patch, of course.)\n> \n> It sounds helpful to me. (I remember that I sometime want to see\n> checkpoint LSNs in server log..)\n\nHmm, ... let's do that for pg14!\n\nThanks for looking,\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 6 Jul 2020 23:01:33 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On 2020-Jul-06, Alvaro Herrera wrote:\n\n> On 2020-Jul-07, Kyotaro Horiguchi wrote:\n\n> > Couldn't we move ConvertToXSegs from xlog.c to xlog_ingernals.h and\n> > use it intead of the bare expression?\n> \n> I was of two minds about that, and the only reason I didn't do it is\n> that we'll need to give it a better name if we do it ... I'm open to\n> suggestions.\n\nIn absence of other suggestions I gave this the name XLogMBVarToSegs,\nand redefined ConvertToXSegs to use that. Didn't touch callers in\nxlog.c to avoid pointless churn. Pushed to both master and 13.\n\nI hope this satisfies everyone ... Masao-san, thanks for reporting the\nproblem, and thanks Horiguchi-san for providing the fix. (Also thanks\nto Amit and Michael for discussion.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 7 Jul 2020 13:48:00 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> In absence of other suggestions I gave this the name XLogMBVarToSegs,\n> and redefined ConvertToXSegs to use that. Didn't touch callers in\n> xlog.c to avoid pointless churn. Pushed to both master and 13.\n\nThe buildfarm's sparc64 members seem unhappy with this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 Jul 2020 13:55:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On 2020-Jul-08, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > In absence of other suggestions I gave this the name XLogMBVarToSegs,\n> > and redefined ConvertToXSegs to use that. Didn't touch callers in\n> > xlog.c to avoid pointless churn. Pushed to both master and 13.\n> \n> The buildfarm's sparc64 members seem unhappy with this.\n\nHmm. Some of them are, yeah, but it's not universal. For example\nmussurana and ibisbill are not showing failures.\n\nAnyway the error is pretty strange: only GetWALAvailability is showing a\nproblem, but the size calculation in the view function def is returning\na negative number, as expected.\n\nSo looking at the code in GetWALAvailability, what happens is that\ntargetSeg >= oldestSlotSeg, but we expect the opposite. I'd bet for\ntargetSeg to be correct, since its input is just the slot LSN -- pretty\neasy. But for oldestSlotSeg, we have KeepLogSeg involved.\n\nNo immediate ideas ...\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 8 Jul 2020 19:07:57 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Jul-08, Tom Lane wrote:\n>> The buildfarm's sparc64 members seem unhappy with this.\n\n> Hmm. Some of them are, yeah, but it's not universal. For example\n> mussurana and ibisbill are not showing failures.\n\nAh, right, I was thinking they hadn't run since this commit, but they\nhave.\n\n> Anyway the error is pretty strange: only GetWALAvailability is showing a\n> problem, but the size calculation in the view function def is returning\n> a negative number, as expected.\n\nWe've previously noted what seem to be compiler optimization bugs on\nboth sparc32 and sparc64; the latest thread about that is\nhttps://www.postgresql.org/message-id/flat/f28f842d-e82b-4e30-a81a-2a1f9fa4a8e1%40www.fastmail.com\n\nThis is looking uncomfortably like the same thing. Tom, could you\nexperiment with different -O levels on those animals?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 Jul 2020 19:24:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On 2020-Jul-08, Tom Lane wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n\n> > Anyway the error is pretty strange: only GetWALAvailability is showing a\n> > problem, but the size calculation in the view function def is returning\n> > a negative number, as expected.\n> \n> We've previously noted what seem to be compiler optimization bugs on\n> both sparc32 and sparc64; the latest thread about that is\n> https://www.postgresql.org/message-id/flat/f28f842d-e82b-4e30-a81a-2a1f9fa4a8e1%40www.fastmail.com\n> \n> This is looking uncomfortably like the same thing.\n\nOuch. So 12 builds with -O0 but 13 does not? Did we do something to\nsequence.c to work around this problem? I cannot find anything.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 8 Jul 2020 19:35:26 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Jul-08, Tom Lane wrote:\n>> We've previously noted what seem to be compiler optimization bugs on\n>> both sparc32 and sparc64; the latest thread about that is\n>> https://www.postgresql.org/message-id/flat/f28f842d-e82b-4e30-a81a-2a1f9fa4a8e1%40www.fastmail.com\n>> This is looking uncomfortably like the same thing.\n\n> Ouch. So 12 builds with -O0 but 13 does not?\n\nUnless Tom's changed the animal's config since that thread, yes.\n\n> Did we do something to\n> sequence.c to work around this problem? I cannot find anything.\n\nWe did not. If it's a compiler bug, and one as phase-of-the-moon-\ndependent as this seems to be, I'd have zero confidence that any\nspecific source code change would fix it (barring someone confidently\nexplaining exactly what the compiler bug is, anyway). The best we\ncan do for now is hope that backing off the -O level avoids the bug.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 Jul 2020 20:35:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On 2020-Jul-08, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n\n> > Ouch. So 12 builds with -O0 but 13 does not?\n> \n> Unless Tom's changed the animal's config since that thread, yes.\n\nI verified the configs in branches 12 and 13 in one of the failing\nanimals, and yes that's still the case.\n\n> > Did we do something to\n> > sequence.c to work around this problem? I cannot find anything.\n> \n> We did not. If it's a compiler bug, and one as phase-of-the-moon-\ndependent as this seems to be, I'd have zero confidence that any\n> specific source code change would fix it (barring someone confidently\n> explaining exactly what the compiler bug is, anyway). The best we\n> can do for now is hope that backing off the -O level avoids the bug.\n\nAn easy workaround might be to add -O0 for that platform in that\ndirectory's Makefile.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 8 Jul 2020 20:53:32 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Jul-08, Tom Lane wrote:\n>> We did not. If it's a compiler bug, and one as phase-of-the-moon-\n>> dependent as this seems to be, I'd have zero confidence that any\n>> specific source code change would fix it (barring someone confidently\n>> explaining exactly what the compiler bug is, anyway). The best we\n>> can do for now is hope that backing off the -O level avoids the bug.\n\n> An easy workaround might be to add -O0 for that platform in that\n> directory's Makefile.\n\n\"Back off the -O level in one directory\" seems about as misguided as\n\"back off the -O level in one branch\", if you ask me. There's no\nreason to suppose that the problem won't bite us somewhere else next\nweek.\n\nThe previous sparc32 bug that we'd made some effort to run to ground\nis described here:\nhttps://www.postgresql.org/message-id/15142.1498165769@sss.pgh.pa.us\nWe really don't know what aspects of the source code trigger that.\nI'm slightly suspicious that we might be seeing the same bug in the\nsparc64 builds, but it's just a guess.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 Jul 2020 21:03:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "... or on the other hand, maybe these animals are just showing more\nsensitivity than others to an actual code bug. skink is showing\nvalgrind failures in this very area, on both HEAD and v13:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2020-07-08%2021%3A13%3A02\n\n==3166208== VALGRINDERROR-BEGIN\n==3166208== Conditional jump or move depends on uninitialised value(s)\n==3166208== at 0x28618D: KeepLogSeg (xlog.c:9627)\n==3166208== by 0x296AC5: GetWALAvailability (xlog.c:9533)\n==3166208== by 0x4FFECB: pg_get_replication_slots (slotfuncs.c:356)\n==3166208== by 0x3C762F: ExecMakeTableFunctionResult (execSRF.c:234)\n==3166208== by 0x3D9A9A: FunctionNext (nodeFunctionscan.c:95)\n==3166208== by 0x3C81D6: ExecScanFetch (execScan.c:133)\n==3166208== by 0x3C81D6: ExecScan (execScan.c:199)\n==3166208== by 0x3D99A9: ExecFunctionScan (nodeFunctionscan.c:270)\n==3166208== by 0x3C5072: ExecProcNodeFirst (execProcnode.c:450)\n==3166208== by 0x3BD35E: ExecProcNode (executor.h:245)\n==3166208== by 0x3BD35E: ExecutePlan (execMain.c:1646)\n==3166208== by 0x3BE039: standard_ExecutorRun (execMain.c:364)\n==3166208== by 0x3BE102: ExecutorRun (execMain.c:308)\n==3166208== by 0x559669: PortalRunSelect (pquery.c:912)\n==3166208== Uninitialised value was created by a stack allocation\n==3166208== at 0x296A84: GetWALAvailability (xlog.c:9523)\n==3166208== \n==3166208== VALGRINDERROR-END\n\nand even the most cursory look at the code confirms that there's a\nreal bug here. KeepLogSeg expects *logSegNo to be defined on entry,\nbut GetWALAvailability hasn't bothered to initialize oldestSlotSeg.\nIt is not clear to me which one is in the wrong; the comment for\nKeepLogSeg isn't particularly clear on this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Jul 2020 17:56:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On 2020-Jul-09, Tom Lane wrote:\n\n> and even the most cursory look at the code confirms that there's a\n> real bug here. KeepLogSeg expects *logSegNo to be defined on entry,\n> but GetWALAvailability hasn't bothered to initialize oldestSlotSeg.\n> It is not clear to me which one is in the wrong; the comment for\n> KeepLogSeg isn't particularly clear on this.\n\nOh, so I introduced the bug when I removed the initialization in this\nfix. That one was using the wrong datatype, but evidently it achieved\nthe right effect. And KeepLogSeg is using the wrong datatype Invalid\nmacro also.\n\nI think we should define InvalidXLogSegNo to be ~((uint64)0) and add a\nmacro to test for that.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 9 Jul 2020 18:20:04 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On 2020-Jul-09, Alvaro Herrera wrote:\n\n> I think we should define InvalidXLogSegNo to be ~((uint64)0) and add a\n> macro to test for that.\n\nThat's overkill really. I just used zero. Running\ncontrib/test_decoding under valgrind, this now passes.\n\nI think I'd rather do away with the compare to zero, and initialize to\nsomething else in GetWALAvailability, though. What we're doing seems\nunclean and unclear.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 10 Jul 2020 19:40:37 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Jul-09, Alvaro Herrera wrote:\n>> I think we should define InvalidXLogSegNo to be ~((uint64)0) and add a\n>> macro to test for that.\n\n> That's overkill really. I just used zero. Running\n> contrib/test_decoding under valgrind, this now passes.\n\n> I think I'd rather do away with the compare to zero, and initialize to\n> something else in GetWALAvailability, though. What we're doing seems\n> unclean and unclear.\n\nIs zero really not a valid segment number?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 11 Jul 2020 10:27:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On 2020-Jul-11, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2020-Jul-09, Alvaro Herrera wrote:\n> >> I think we should define InvalidXLogSegNo to be ~((uint64)0) and add a\n> >> macro to test for that.\n> \n> > That's overkill really. I just used zero. Running\n> > contrib/test_decoding under valgrind, this now passes.\n> \n> > I think I'd rather do away with the compare to zero, and initialize to\n> > something else in GetWALAvailability, though. What we're doing seems\n> > unclean and unclear.\n> \n> Is zero really not a valid segment number?\n\nNo, but you cannot retreat from that ...\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 11 Jul 2020 17:26:30 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "A much more sensible answer is to initialize the segno to the segment\ncurrently being written, as in the attached.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 13 Jul 2020 13:23:56 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "On 2020-Jul-13, Alvaro Herrera wrote:\n\n> A much more sensible answer is to initialize the segno to the segment\n> currently being written, as in the attached.\n\nRan the valgrind test locally and it passes. Pushed it now.\n\nThanks,\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 13 Jul 2020 13:52:12 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
},
{
"msg_contents": "At Mon, 13 Jul 2020 13:52:12 -0400, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in \n> On 2020-Jul-13, Alvaro Herrera wrote:\n> \n> > A much more sensible answer is to initialize the segno to the segment\n> > currently being written, as in the attached.\n> \n> Ran the valgrind test locally and it passes. Pushed it now.\n\n-\tif (XLogRecPtrIsInvalid(*logSegNo) || segno < *logSegNo)\n+\tif (segno < *logSegNo)\n\nOops! Thank you for fixing it!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 14 Jul 2020 13:38:09 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: min_safe_lsn column in pg_replication_slots view"
}
] |
[
{
"msg_contents": "Adjacent to the discussion in [0] I wanted to document the factorial() \nfunction and expand the tests for that slightly with some edge cases.\n\nI noticed that the current implementation returns 1 for the factorial of \nall negative numbers:\n\nSELECT factorial(-4);\n factorial\n-----------\n 1\n\nWhile there are some advanced mathematical constructions that define \nfactorials for negative numbers, they certainly produce different \nanswers than this.\n\nCuriously, before the reimplementation of factorial using numeric \n(04a4821adef38155b7920ba9eb83c4c3c29156f8), it returned 0 for negative \nnumbers, which is also not correct by any theory I could find.\n\nI propose to change this to error out for negative numbers.\n\nSee attached patches for test and code changes.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/38ca86db-42ab-9b48-2902-337a0d6b8311%402ndquadrant.com\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 15 Jun 2020 09:11:16 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "factorial of negative numbers"
},
{
"msg_contents": "On Mon, Jun 15, 2020 at 12:41 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> Adjacent to the discussion in [0] I wanted to document the factorial()\n> function and expand the tests for that slightly with some edge cases.\n>\n> I noticed that the current implementation returns 1 for the factorial of\n> all negative numbers:\n>\n> SELECT factorial(-4);\n> factorial\n> -----------\n> 1\n>\n> While there are some advanced mathematical constructions that define\n> factorials for negative numbers, they certainly produce different\n> answers than this.\n>\n> Curiously, before the reimplementation of factorial using numeric\n> (04a4821adef38155b7920ba9eb83c4c3c29156f8), it returned 0 for negative\n> numbers, which is also not correct by any theory I could find.\n>\n> I propose to change this to error out for negative numbers.\n\n+1.\nHere are some comments\nI see below in the .out but there's not corresponding SQL in .sql.\n+SELECT factorial(-4);\n+ factorial\n+-----------\n+ 1\n+(1 row)\n+\n\nShould we also add -4! to cover both function as well as the operator?\n\n+ if (num < 0)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n\nThis looks more of ERRCODE_FEATURE_NOT_SUPPORTED esp. since factorial\nof negative numbers is defined but we are not supporting it. I looked\nat some other usages of this error code. All of them are really are\nout of range value errors.\n\nOtherwise the patches look good to me.\n\n\n",
"msg_date": "Mon, 15 Jun 2020 16:45:25 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: factorial of negative numbers"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Adjacent to the discussion in [0] I wanted to document the factorial() \n> function and expand the tests for that slightly with some edge cases.\n> ...\n> I propose to change this to error out for negative numbers.\n\n+1 for all of this, with a couple trivial nitpicks about the error\nchanges:\n\n* I'd have written the error as \"factorial of a negative number is\nundefined\" ... not sure what a grammar stickler would say about it,\nbut that seems more natural to me.\n\n* I'd leave the \"if (num <= 1)\" test after the error check as-is;\nit's probably a shade cheaper than \"if (num == 0 || num == 1)\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Jun 2020 09:59:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: factorial of negative numbers"
},
{
"msg_contents": "... oh, one slightly more important nit-pick: per the catalogs and\ncode, the function is factorial(bigint):\n\n Schema | Name | Result data type | Argument data types | Type \n------------+-----------+------------------+---------------------+------\n pg_catalog | factorial | numeric | bigint | func\n\nbut you have it documented as factorial(numeric).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Jun 2020 10:24:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: factorial of negative numbers"
},
{
"msg_contents": "On 2020-06-15 13:15, Ashutosh Bapat wrote:\n> Here are some comments\n> I see below in the .out but there's not corresponding SQL in .sql.\n> +SELECT factorial(-4);\n> + factorial\n> +-----------\n> + 1\n> +(1 row)\n> +\n> \n> Should we also add -4! to cover both function as well as the operator?\n\nI will add that. I wasn't actually sure about the precedence of these \noperators, so it is interesting as a regression test for that as well.\n\n> + if (num < 0)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n> \n> This looks more of ERRCODE_FEATURE_NOT_SUPPORTED esp. since factorial\n> of negative numbers is defined but we are not supporting it. I looked\n> at some other usages of this error code. All of them are really are\n> out of range value errors.\n\nThe proposed error message says this is undefined. If we use an error \ncode that says it's not implemented, then the message should also \nreflect that. But that would in turn open an invitation for someone to \nimplement it, and I'm not sure we want that. It could go either way, \nbut we should be clear on what we want.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 16 Jun 2020 05:18:25 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: factorial of negative numbers"
},
{
"msg_contents": "On Tue, 16 Jun 2020 at 08:48, Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2020-06-15 13:15, Ashutosh Bapat wrote:\n> > Here are some comments\n> > I see below in the .out but there's not corresponding SQL in .sql.\n> > +SELECT factorial(-4);\n> > + factorial\n> > +-----------\n> > + 1\n> > +(1 row)\n> > +\n> >\n> > Should we also add -4! to cover both function as well as the operator?\n>\n> I will add that. I wasn't actually sure about the precedence of these\n> operators, so it is interesting as a regression test for that as well.\n>\n+1.\n\n\n> > + if (num < 0)\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n> >\n> > This looks more of ERRCODE_FEATURE_NOT_SUPPORTED esp. since factorial\n> > of negative numbers is defined but we are not supporting it. I looked\n> > at some other usages of this error code. All of them are really are\n> > out of range value errors.\n>\n> The proposed error message says this is undefined. If we use an error\n> code that says it's not implemented, then the message should also\n> reflect that.\n\n\nYes. BTW, OUT_OF_RANGE is not exactly \"undefined\" either. I searched for an\nerror code for \"UNDEFINED\" result but didn't find any.\n\n\n> But that would in turn open an invitation for someone to\n> implement it, and I'm not sure we want that.\n\n\nIt will be more complex code, so difficult to implement but why do we\nprevent why not.\n\n\n> It could go either way,\n> but we should be clear on what we want.\n>\n\nDivison by zero is really undefined, 12345678 * 12345678 (just some\nnumbers) is out of range of say int4, but factorial of a negative number\nhas some meaning and is defined but PostgreSQL does not support it.\n\n-- \nBest Wishes,\nAshutosh",
"msg_date": "Tue, 16 Jun 2020 10:30:20 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: factorial of negative numbers"
},
{
"msg_contents": "On Tue, 16 Jun 2020 at 06:00, Ashutosh Bapat\n<ashutosh.bapat@2ndquadrant.com> wrote:\n>\n> Divison by zero is really undefined, 12345678 * 12345678 (just some numbers) is out of range of say int4, but factorial of a negative number has some meaning and is defined but PostgreSQL does not support it.\n>\n\nActually, I think undefined/out-of-range is the right error to throw here.\n\nMost common implementations do regard factorial as undefined for\nanything other than positive integers, as well as following the\nconvention that factorial(0) = 1. Some implementations extend the\nfactorial to non-integer inputs, negative inputs, or even complex\ninputs by defining it in terms of the gamma function. However, even\nthen, it is undefined for negative integer inputs.\n\nRegards,\nDean\n\n[1] https://en.wikipedia.org/wiki/Factorial\n[2] https://en.wikipedia.org/wiki/Gamma_function\n\n\n",
"msg_date": "Tue, 16 Jun 2020 08:31:21 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: factorial of negative numbers"
},
{
"msg_contents": "On Tue, Jun 16, 2020 at 08:31:21AM +0100, Dean Rasheed wrote:\n> On Tue, 16 Jun 2020 at 06:00, Ashutosh Bapat\n> <ashutosh.bapat@2ndquadrant.com> wrote:\n> >\n> > Divison by zero is really undefined, 12345678 * 12345678 (just some numbers) is out of range of say int4, but factorial of a negative number has some meaning and is defined but PostgreSQL does not support it.\n> >\n> \n> Actually, I think undefined/out-of-range is the right error to throw here.\n> \n> Most common implementations do regard factorial as undefined for\n> anything other than positive integers, as well as following the\n> convention that factorial(0) = 1. Some implementations extend the\n> factorial to non-integer inputs, negative inputs, or even complex\n> inputs by defining it in terms of the gamma function. However, even\n> then, it is undefined for negative integer inputs.\n\nWow, they define it for negative inputs, but not negative integer\ninputs? I am curious what the logic is behind that.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Tue, 16 Jun 2020 04:55:18 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: factorial of negative numbers"
},
{
"msg_contents": "On Tue, Jun 16, 2020 at 10:55 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Tue, Jun 16, 2020 at 08:31:21AM +0100, Dean Rasheed wrote:\n> >\n> > Most common implementations do regard factorial as undefined for\n> > anything other than positive integers, as well as following the\n> > convention that factorial(0) = 1. Some implementations extend the\n> > factorial to non-integer inputs, negative inputs, or even complex\n> > inputs by defining it in terms of the gamma function. However, even\n> > then, it is undefined for negative integer inputs.\n>\n> Wow, they define it for negative inputs, but not negative integer\n> inputs? I am curious what the logic is behind that.\n>\n\nIt is defined as NaN (or undefined), which is not in the realm of integer\nnumbers. You might get a clear idea of the logic from [1], where they also\nmake a case for the error being ERRCODE_DIVISION_BY_ZERO.\n\n[1] http://mathforum.org/library/drmath/view/60851.html\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Tue, 16 Jun 2020 11:08:56 +0200",
"msg_from": "Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: factorial of negative numbers"
},
{
"msg_contents": "On Tue, 16 Jun 2020 at 09:55, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Tue, Jun 16, 2020 at 08:31:21AM +0100, Dean Rasheed wrote:\n> >\n> > Most common implementations do regard factorial as undefined for\n> > anything other than positive integers, as well as following the\n> > convention that factorial(0) = 1. Some implementations extend the\n> > factorial to non-integer inputs, negative inputs, or even complex\n> > inputs by defining it in terms of the gamma function. However, even\n> > then, it is undefined for negative integer inputs.\n>\n> Wow, they define it for negative inputs, but not negative integer\n> inputs? I am curious what the logic is behind that.\n>\n\nThat's just the way the maths works out. The gamma function is\nwell-defined for all real and complex numbers except for zero and\nnegative integers, where it has poles (singularities/infinities).\nActually the gamma function isn't the only possible extension of the\nfactorial function, but it's the one nearly everyone uses, if they\nbother at all (most people don't).\n\nCuriously, the most widespread implementation is probably the\ncalculator in MS Windows.\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 16 Jun 2020 10:34:38 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: factorial of negative numbers"
},
{
"msg_contents": "On Tue, 16 Jun 2020 at 10:09, Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com> wrote:\n>\n> It is defined as NaN (or undefined), which is not in the realm of integer numbers. You might get a clear idea of the logic from [1], where they also make a case for the error being ERRCODE_DIVISION_BY_ZERO.\n>\n> [1] http://mathforum.org/library/drmath/view/60851.html\n>\n\nHmm, I think ERRCODE_DIVISION_BY_ZERO should probably be reserved for\nactual division functions.\n\nWith [1], we could return 'Infinity', which would be more correct from\na mathematical point of view, and might be preferable to erroring-out\nin some contexts.\n\n[1] https://www.postgresql.org/message-id/606717.1591924582%40sss.pgh.pa.us\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 16 Jun 2020 10:49:59 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: factorial of negative numbers"
},
{
"msg_contents": "On Tue, Jun 16, 2020 at 11:50 AM Dean Rasheed <dean.a.rasheed@gmail.com>\nwrote:\n\n> On Tue, 16 Jun 2020 at 10:09, Juan José Santamaría Flecha\n> <juanjo.santamaria@gmail.com> wrote:\n> >\n> > It is defined as NaN (or undefined), which is not in the realm of\n> integer numbers. You might get a clear idea of the logic from [1], where\n> they also make a case for the error being ERRCODE_DIVISION_BY_ZERO.\n> >\n> > [1] http://mathforum.org/library/drmath/view/60851.html\n> >\n>\n> Hmm, I think ERRCODE_DIVISION_BY_ZERO should probably be reserved for\n> actual division functions.\n>\n> With [1], we could return 'Infinity', which would be more correct from\n> a mathematical point of view, and might be preferable to erroring-out\n> in some contexts.\n>\n> [1]\n> https://www.postgresql.org/message-id/606717.1591924582%40sss.pgh.pa.us\n\n\nReturning division-by-zero would be confusing for the user.\n\nI think that out-of-range would be a reasonable solution for \"FUNCTION\nfactorial(integer) RETURNS integer\", because it could only return an\ninteger when the input is a positive integer, but for \"FUNCTION\nfactorial(integer) RETURNS numeric\" the returned value should be 'NaN'\nwithout error.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Tue, 16 Jun 2020 12:30:16 +0200",
"msg_from": "Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: factorial of negative numbers"
},
{
"msg_contents": "On 2020-06-16 11:49, Dean Rasheed wrote:\n> With [1], we could return 'Infinity', which would be more correct from\n> a mathematical point of view, and might be preferable to erroring-out\n> in some contexts.\n\nBut the limit of the gamma function is either negative or positive \ninfinity, depending on from what side you come, so we can't just return \none of those two.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 16 Jun 2020 13:18:54 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: factorial of negative numbers"
},
{
"msg_contents": "On Tue, 16 Jun 2020 at 12:18, Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-06-16 11:49, Dean Rasheed wrote:\n> > With [1], we could return 'Infinity', which would be more correct from\n> > a mathematical point of view, and might be preferable to erroring-out\n> > in some contexts.\n>\n> But the limit of the gamma function is either negative or positive\n> infinity, depending on from what side you come, so we can't just return\n> one of those two.\n>\n\nHmm yeah, although that's only really the case if you define it in\nterms of continuous real input values.\n\nI think you're probably right though. Raising an out-of-range error\nseems like the best option.\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 16 Jun 2020 13:17:59 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: factorial of negative numbers"
},
{
"msg_contents": "On 2020-06-16 14:17, Dean Rasheed wrote:\n> I think you're probably right though. Raising an out-of-range error\n> seems like the best option.\n\ncommitted as proposed then\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 18 Jun 2020 09:13:35 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: factorial of negative numbers"
},
{
"msg_contents": "On Thu, Jun 18, 2020 at 9:13 AM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2020-06-16 14:17, Dean Rasheed wrote:\n> > I think you're probably right though. Raising an out-of-range error\n> > seems like the best option.\n>\n> committed as proposed then\n>\n\nThe gamma function from math.h returns a NaN for negative integer values,\nthe postgres factorial function returns a numeric, which allows NaN.\nRaising an out-of-range error seems only reasonable for an integer output.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Thu, 18 Jun 2020 09:43:54 +0200",
"msg_from": "Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: factorial of negative numbers"
},
{
"msg_contents": "On 2020-06-18 09:43, Juan José Santamaría Flecha wrote:\n> \n> On Thu, Jun 18, 2020 at 9:13 AM Peter Eisentraut \n> <peter.eisentraut@2ndquadrant.com \n> <mailto:peter.eisentraut@2ndquadrant.com>> wrote:\n> \n> On 2020-06-16 14:17, Dean Rasheed wrote:\n> > I think you're probably right though. Raising an out-of-range error\n> > seems like the best option.\n> \n> committed as proposed then\n> \n> \n> The gamma function from math.h returns a NaN for negative integer \n> values, the postgres factorial function returns a numeric, which allows \n> NaN. Raising an out-of-range error seems only reasonable for an integer \n> output.\n\nBut this is not the gamma function. The gamma function is undefined at \nzero, but factorial(0) returns 1. So this is similar but not the same.\n\nMoreover, functions such as log() also error out on unsupportable input \nvalues, so it's consistent with the spec.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 18 Jun 2020 13:57:49 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: factorial of negative numbers"
},
{
"msg_contents": "On Thu, Jun 18, 2020 at 1:57 PM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2020-06-18 09:43, Juan José Santamaría Flecha wrote:\n> > The gamma function from math.h returns a NaN for negative integer\n> > values, the postgres factorial function returns a numeric, which allows\n> > NaN. Raising an out-of-range error seems only reasonable for an integer\n> > output.\n>\n> But this is not the gamma function. The gamma function is undefined at\n> zero, but factorial(0) returns 1. So this is similar but not the same.\n>\n\nfactorial(n) = gamma(n + 1)\n\n\n> Moreover, functions such as log() also error out on unsupportable input\n> values, so it's consistent with the spec.\n>\n\nIf factorial() ever gets extended to other input types it might get\ninconsistent; should !(-1.0) also raise an error?\n\nLogarithm is just a different case:\n\nhttps://en.wikipedia.org/wiki/Logarithm#/media/File:Log4.svg\n\nRegards,\n\nJuan José Santamaría Flecha\n",
"msg_date": "Thu, 18 Jun 2020 14:19:10 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: factorial of negative numbers"
}
] |
[
{
"msg_contents": "Hello,\n\n\nThe tables for pg_stat_ views in the following page, starting from Table 27.3, have only one column in PG 13. They had 3 columns in PG 12 and earlier.\n\nhttps://www.postgresql.org/docs/13/monitoring-stats.html\n\nIs this intentional? It has become a bit unfriendly to read for me, a visually impaired user who uses screen reader software.\n\nThe tables for functions in Chapter 9 are similar.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n\n",
"msg_date": "Mon, 15 Jun 2020 07:49:11 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "[bug?] Is the format of tables in the documentation broken in PG 13?"
},
{
"msg_contents": "> On 15 Jun 2020, at 09:49, tsunakawa.takay@fujitsu.com wrote:\n\n> Is this intentional?\n\nYes, this was a deliberate change made to be able to fit more expansive\ndescriptions of columns etc.\n\n> It has become a bit unfriendly to read for me, a visually impaired user who uses screen reader software.\n\nThat's less good. The W3C Web Accessibility Initiative has guidance for multi-\nlevel header tables which might be useful here.\n\n\thttps://www.w3.org/WAI/tutorials/tables/multi-level/\n\nMaybe if we use the id and headers attributes we can give screen readers enough\nclues to make sense of the information?\n\ncheers ./daniel\n\n",
"msg_date": "Mon, 15 Jun 2020 10:54:49 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [bug?] Is the format of tables in the documentation broken in PG\n 13?"
},
{
"msg_contents": "From: Daniel Gustafsson <daniel@yesql.se>\n> Yes, this was a deliberate change made to be able to fit more expansive\n> descriptions of columns etc.\n\nThanks for your quick response and information. I'm relieved to know that it was not broken.\n\n\n> That's less good. The W3C Web Accessibility Initiative has guidance for\n> multi-\n> level header tables which might be useful here.\n> \n> \thttps://www.w3.org/WAI/tutorials/tables/multi-level/\n> \n> Maybe if we use the id and headers attributes we can give screen readers\n> enough\n> clues to make sense of the information?\n\nHm, I think I'll look into this.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n",
"msg_date": "Tue, 16 Jun 2020 00:21:52 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [bug?] Is the format of tables in the documentation broken in PG\n 13?"
},
{
"msg_contents": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> writes:\n> From: Daniel Gustafsson <daniel@yesql.se>\n>> That's less good. The W3C Web Accessibility Initiative has guidance for\n>> multi-level header tables which might be useful here.\n>> https://www.w3.org/WAI/tutorials/tables/multi-level/\n>> Maybe if we use the id and headers attributes we can give screen readers\n>> enough clues to make sense of the information?\n\n> Hm, I think I'll look into this.\n\nPlease do. I looked at the referenced link a bit, but it wasn't clear\nto me that they suggested anything useful to do :-(.\n\nIt'd probably be best to discuss this on the pgsql-docs list.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Jun 2020 21:00:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [bug?] Is the format of tables in the documentation broken in PG\n 13?"
}
] |
[
{
"msg_contents": "Hi all,\n\nI have begun my annual study of WAL consistency across replays, and\nwal_consistency_checking = 'all' is pointing out at some issues with\nat least VACUUM and SPGist:\nFATAL: inconsistent page found, rel 1663/16385/22133, forknum 0,\nblkno 15\nCONTEXT: WAL redo at 0/739CEDE8 for SPGist/VACUUM_REDIRECT: newest\nXID 4619\n\nIt may be possible that there are other failures, I have just run\ninstallcheck and this is the first failure I saw after replaying all\nthe generated WAL on a standby. Please note that I have also checked\n12, and installcheck passes.\n\nThanks,\n--\nMichael",
"msg_date": "Mon, 15 Jun 2020 22:14:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Failures with wal_consistency_checking and 13~"
},
{
"msg_contents": "On Mon, Jun 15, 2020 at 10:14 AM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> Hi all,\n>\n> I have begun my annual study of WAL consistency across replays, and\n> wal_consistency_checking = 'all' is pointing out at some issues with\n> at least VACUUM and SPGist:\n> FATAL: inconsistent page found, rel 1663/16385/22133, forknum 0,\n> blkno 15\n> CONTEXT: WAL redo at 0/739CEDE8 for SPGist/VACUUM_REDIRECT: newest\n> XID 4619\n>\n> It may be possible that there are other failures, I have just run\n> installcheck and this is the first failure I saw after replaying all\n> the generated WAL on a standby. Please note that I have also checked\n> 12, and installcheck passes.\n>\nWith Postgres 13 on Windows 10 (home), msvc 2019 64 bits,\nshutting down without interrupting the database, this log has\nconsistently appeared.\n\n2020-06-15 21:40:35.998 -03 [3380] LOG: database system shutdown was\ninterrupted; last known up at 2020-06-15 21:36:01 -03\n2020-06-15 21:40:37.978 -03 [3380] LOG: database system was not properly\nshut down; automatic recovery in progress\n2020-06-15 21:40:37.992 -03 [3380] LOG: redo starts at 0/8A809C78\n2020-06-15 21:40:37.992 -03 [3380] LOG: invalid record length at\n0/8A809CB0: wanted 24, got 0\n2020-06-15 21:40:37.992 -03 [3380] LOG: redo done at 0/8A809C78\n\nRegards,\nRanier Vilela\n",
"msg_date": "Mon, 15 Jun 2020 23:33:42 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failures with wal_consistency_checking and 13~"
},
{
"msg_contents": "On Mon, Jun 15, 2020 at 11:33:42PM -0300, Ranier Vilela wrote:\n> With Postgres 13, Windows 10 (home), msvc 2019 64 bits,\n> Shutting down, without interrupting the database, consistently, this log\n> has appeared.\n> \n> 2020-06-15 21:40:35.998 -03 [3380] LOG: database system shutdown was\n> interrupted; last known up at 2020-06-15 21:36:01 -03\n> 2020-06-15 21:40:37.978 -03 [3380] LOG: database system was not properly\n> shut down; automatic recovery in progress\n> 2020-06-15 21:40:37.992 -03 [3380] LOG: redo starts at 0/8A809C78\n> 2020-06-15 21:40:37.992 -03 [3380] LOG: invalid record length at\n> 0/8A809CB0: wanted 24, got 0\n> 2020-06-15 21:40:37.992 -03 [3380] LOG: redo done at 0/8A809C78\n\nCould you please keep discussions on their own threads? After\nstopping Postgres in a sudden way (immediate mode or just SIGKILL), it\nis normal to see crash recovery happen, and it is likewise normal to see\nan \"invalid record length\" in the logs when the end of WAL is reached,\nmeaning the end of recovery.\n\nThanks.\n--\nMichael",
"msg_date": "Tue, 16 Jun 2020 12:55:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Failures with wal_consistency_checking and 13~"
},
{
"msg_contents": "On 2020-Jun-15, Michael Paquier wrote:\n\n> I have begun my annual study of WAL consistency across replays, and\n> wal_consistency_checking = 'all' is pointing out at some issues with\n> at least VACUUM and SPGist:\n> FATAL: inconsistent page found, rel 1663/16385/22133, forknum 0,\n> blkno 15\n> CONTEXT: WAL redo at 0/739CEDE8 for SPGist/VACUUM_REDIRECT: newest\n> XID 4619\n> \n> It may be possible that there are other failures, I have just run\n> installcheck and this is the first failure I saw after replaying all\n> the generated WAL on a standby. Please note that I have also checked\n> 12, and installcheck passes.\n\nUmm. Alexander, do you have an idea of what this is about?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 19 Jun 2020 15:33:49 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Failures with wal_consistency_checking and 13~"
},
{
"msg_contents": "On Fri, Jun 19, 2020 at 10:34 PM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n>\n> On 2020-Jun-15, Michael Paquier wrote:\n>\n> > I have begun my annual study of WAL consistency across replays, and\n> > wal_consistency_checking = 'all' is pointing out at some issues with\n> > at least VACUUM and SPGist:\n> > FATAL: inconsistent page found, rel 1663/16385/22133, forknum 0,\n> > blkno 15\n> > CONTEXT: WAL redo at 0/739CEDE8 for SPGist/VACUUM_REDIRECT: newest\n> > XID 4619\n> >\n> > It may be possible that there are other failures, I have just run\n> > installcheck and this is the first failure I saw after replaying all\n> > the generated WAL on a standby. Please note that I have also checked\n> > 12, and installcheck passes.\n>\n> Umm. Alexander, do you have an idea of what this is about?\n\nI don't have an idea yet, but I'll check this out\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sat, 20 Jun 2020 13:16:52 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Failures with wal_consistency_checking and 13~"
},
{
"msg_contents": "On Sat, Jun 20, 2020 at 1:16 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> On Fri, Jun 19, 2020 at 10:34 PM Alvaro Herrera\n> <alvherre@2ndquadrant.com> wrote:\n> >\n> > On 2020-Jun-15, Michael Paquier wrote:\n> >\n> > > I have begun my annual study of WAL consistency across replays, and\n> > > wal_consistency_checking = 'all' is pointing out at some issues with\n> > > at least VACUUM and SPGist:\n> > > FATAL: inconsistent page found, rel 1663/16385/22133, forknum 0,\n> > > blkno 15\n> > > CONTEXT: WAL redo at 0/739CEDE8 for SPGist/VACUUM_REDIRECT: newest\n> > > XID 4619\n> > >\n> > > It may be possible that there are other failures, I have just run\n> > > installcheck and this is the first failure I saw after replaying all\n> > > the generated WAL on a standby. Please note that I have also checked\n> > > 12, and installcheck passes.\n> >\n> > Umm. Alexander, do you an idea of what this is about?\n>\n> I don't have idea yet, but I'll check this out\n\nI have discovered and fixed the issue in a44dd932ff. spg_mask()\nmasked unused space only when pagehdr->pd_lower >\nSizeOfPageHeaderData. But during the vacuum regression tests, one\npage has been erased completely and pagehdr->pd_lower was set to\nSizeOfPageHeaderData. Actually, 13 didn't introduce any issue, it\njust added a test that spotted the issue. The issue is here since\na507b86900.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sat, 20 Jun 2020 17:43:19 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failures with wal_consistency_checking and 13~"
},
{
"msg_contents": "On Sat, Jun 20, 2020 at 05:43:19PM +0300, Alexander Korotkov wrote:\n> I have discovered and fixed the issue in a44dd932ff. spg_mask()\n> masked unused space only when pagehdr->pd_lower >\n> SizeOfPageHeaderData. But during the vacuum regression tests, one\n> page has been erased completely and pagehdr->pd_lower was set to\n> SizeOfPageHeaderData. Actually, 13 didn't introduce any issue, it\n> just added a test that spotted the issue. The issue is here since\n> a507b86900.\n\nThanks Alexander for looking at this issue! My runs with\nwal_consistency_checking are all clear now.\n--\nMichael",
"msg_date": "Mon, 22 Jun 2020 14:15:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Failures with wal_consistency_checking and 13~"
}
] |
[
{
"msg_contents": "Hi all,\n\nAttempting to run installcheck with 13~ and a value of work_mem lower\nthan the default causes two failures, both related to incremental\nsorts (here work_mem = 1MB):\n1) Test incremental_sort:\n@@ -4,12 +4,13 @@\n select * from (select * from tenk1 order by four) t order by four, ten;\n QUERY PLAN \n -----------------------------------\n- Sort\n+ Incremental Sort\n Sort Key: tenk1.four, tenk1.ten\n+ Presorted Key: tenk1.four\n -> Sort\n Sort Key: tenk1.four\n -> Seq Scan on tenk1\n-(5 rows)\n+(6 rows)\n\n2) Test join:\n@@ -2368,12 +2368,13 @@\n -> Merge Left Join\n Merge Cond: (x.thousand = y.unique2)\n Join Filter: ((x.twothousand = y.hundred) AND (x.fivethous = y.unique2))\n- -> Sort\n+ -> Incremental Sort\n Sort Key: x.thousand, x.twothousand, x.fivethous\n- -> Seq Scan on tenk1 x\n+ Presorted Key: x.thousand\n+ -> Index Scan using tenk1_thous_tenthous on tenk1 x\n -> Materialize\n -> Index Scan using tenk1_unique2 on tenk1 y\n-(9 rows)\n+(10 rows)\n\nThere are of course regression failures when changing the relation\npage size or such, but we should make the tests more portable when it\ncomes to work_mem (this issue does not exist in ~12), or people running\ninstallcheck on a new instance will be surprised. Please note that I\nhave not looked at the problem in detail, but a simple solution would\nbe to enforce work_mem in those code paths to keep the two plans\nstable.\n\nThanks,\n--\nMichael",
"msg_date": "Mon, 15 Jun 2020 22:29:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Failures with installcheck and low work_mem value in 13~"
},
{
"msg_contents": "On Mon, Jun 15, 2020 at 10:29:41PM +0900, Michael Paquier wrote:\n> Attempting to run installcheck with 13~ and a value of work_mem lower\n> than the default causes two failures, both related to incremental\n> sorts (here work_mem = 1MB):\n> 1) Test incremental_sort:\n> @@ -4,12 +4,13 @@\n> select * from (select * from tenk1 order by four) t order by four, ten;\n> QUERY PLAN \n> -----------------------------------\n> - Sort\n> + Incremental Sort\n> Sort Key: tenk1.four, tenk1.ten\n> + Presorted Key: tenk1.four\n> -> Sort\n> Sort Key: tenk1.four\n> -> Seq Scan on tenk1\n> -(5 rows)\n> +(6 rows)\n\nLooking at this one, it happens that this is the first test in\nincremental_sort.sql, and we have the following comment:\n-- When we have to sort the entire table, incremental sort will\n-- be slower than plain sort, so it should not be used.\nexplain (costs off)\nselect * from (select * from tenk1 order by four) t order by four, ten;\n\nWhen using such a low value of work_mem, why do we switch to an\nincremental sort if we know that it is going to be slower than a plain\nsort? Something looks wrong in the planner choice here.\n--\nMichael",
"msg_date": "Tue, 16 Jun 2020 14:39:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Failures with installcheck and low work_mem value in 13~"
},
{
"msg_contents": "On Tue, Jun 16, 2020 at 02:39:47PM +0900, Michael Paquier wrote:\n>On Mon, Jun 15, 2020 at 10:29:41PM +0900, Michael Paquier wrote:\n>> Attempting to run installcheck with 13~ and a value of work_mem lower\n>> than the default causes two failures, both related to incremental\n>> sorts (here work_mem = 1MB):\n>> 1) Test incremental_sort:\n>> @@ -4,12 +4,13 @@\n>> select * from (select * from tenk1 order by four) t order by four, ten;\n>> QUERY PLAN\n>> -----------------------------------\n>> - Sort\n>> + Incremental Sort\n>> Sort Key: tenk1.four, tenk1.ten\n>> + Presorted Key: tenk1.four\n>> -> Sort\n>> Sort Key: tenk1.four\n>> -> Seq Scan on tenk1\n>> -(5 rows)\n>> +(6 rows)\n>\n>Looking at this one, it happens that this is the first test in\n>incremental_sort.sql, and we have the following comment:\n>-- When we have to sort the entire table, incremental sort will\n>-- be slower than plain sort, so it should not be used.\n>explain (costs off)\n>select * from (select * from tenk1 order by four) t order by four, ten;\n>\n>When using such a low value of work_mem, why do we switch to an\n>incremental sort if we know that it is going to be slower than a plain\n>sort? Something looks wrong in the planner choice here.\n\nI don't think it's particularly wrong. The \"full sort\" can't be done in\nmemory with such a low work_mem value, while the incremental sort can. So\nI think the planner choice is sane; it's more that the comment does not\nexplicitly state this depends on work_mem too.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 19 Jun 2020 19:05:40 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Failures with installcheck and low work_mem value in 13~"
},
{
"msg_contents": "On Mon, Jun 15, 2020 at 10:29:41PM +0900, Michael Paquier wrote:\n>Hi all,\n>\n>Attempting to run installcheck with 13~ and a value of work_mem lower\n>than the default causes two failures, both related to incremental\n>sorts (here work_mem = 1MB):\n>1) Test incremental_sort:\n>@@ -4,12 +4,13 @@\n> select * from (select * from tenk1 order by four) t order by four, ten;\n> QUERY PLAN\n> -----------------------------------\n>- Sort\n>+ Incremental Sort\n> Sort Key: tenk1.four, tenk1.ten\n>+ Presorted Key: tenk1.four\n> -> Sort\n> Sort Key: tenk1.four\n> -> Seq Scan on tenk1\n>-(5 rows)\n>+(6 rows)\n>\n>2) Test join:\n>@@ -2368,12 +2368,13 @@\n> -> Merge Left Join\n> Merge Cond: (x.thousand = y.unique2)\n> Join Filter: ((x.twothousand = y.hundred) AND (x.fivethous = y.unique2))\n>- -> Sort\n>+ -> Incremental Sort\n> Sort Key: x.thousand, x.twothousand, x.fivethous\n>- -> Seq Scan on tenk1 x\n>+ Presorted Key: x.thousand\n>+ -> Index Scan using tenk1_thous_tenthous on tenk1 x\n> -> Materialize\n> -> Index Scan using tenk1_unique2 on tenk1 y\n>-(9 rows)\n>+(10 rows)\n>\n>There are of course regression failures when changing the relation\n>page size or such, but we should have tests more portable when it\n>comes to work_mem (this issue does not exist in ~12) or people running\n>installcheck on a new instance would be surprised. Please note that I\n>have not looked at the problem in details, but a simple solution would\n>be to enforce work_mem in those code paths to keep the two plans\n>stable.\n>\n\nI don't think the tests can be made not to depend on work_mem, because\nthe costing of sort / incremental sort depends on the value. I agree that\nsetting work_mem at the beginning of the test script is the right\nsolution.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 19 Jun 2020 19:08:11 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Failures with installcheck and low work_mem value in 13~"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> I don't think the tests can be made not to depend on work_mem, because\n> it costing of sort / incremental sort depends on the value. I agree\n> setting the work_mem at the beginning of the test script is the right\n> solution.\n\nI'm a bit skeptical about changing anything here. There are quite\na large number of GUCs that can affect the regression results, and\nit wouldn't be sane to try to force them all to fixed values. For\none thing, that'd be a PITA to maintain, and for another, it's not\ninfrequently useful to run the tests with nonstandard settings to\nsee what happens.\n\nIs there a good reason for being concerned about work_mem in particular\nand this test script in particular?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 19 Jun 2020 13:27:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Failures with installcheck and low work_mem value in 13~"
},
{
"msg_contents": "On Fri, Jun 19, 2020 at 10:27 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> > I don't think the tests can be made not to depend on work_mem, because\n> > it costing of sort / incremental sort depends on the value. I agree\n> > setting the work_mem at the beginning of the test script is the right\n> > solution.\n>\n> I'm a bit skeptical about changing anything here. There are quite\n> a large number of GUCs that can affect the regression results, and\n> it wouldn't be sane to try to force them all to fixed values. For\n> one thing, that'd be a PITA to maintain, and for another, it's not\n> infrequently useful to run the tests with nonstandard settings to\n> see what happens.\n\n+1\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 19 Jun 2020 10:28:56 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Failures with installcheck and low work_mem value in 13~"
},
{
"msg_contents": "On Fri, Jun 19, 2020 at 10:28:56AM -0700, Peter Geoghegan wrote:\n> On Fri, Jun 19, 2020 at 10:27 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I'm a bit skeptical about changing anything here. There are quite\n>> a large number of GUCs that can affect the regression results, and\n>> it wouldn't be sane to try to force them all to fixed values. For\n>> one thing, that'd be a PITA to maintain, and for another, it's not\n>> infrequently useful to run the tests with nonstandard settings to\n>> see what happens.\n> \n> +1\n\nWe cared about such plan stability in the past FWIW, see for\nexample c588df9, as work_mem is a setting that people like to change.\nWhy should this be different? work_mem is a popular configuration\nsetting. Perhaps people will not complain about that being an issue\nwhen running installcheck; we'll know with time. Anyway, I am fine\nwith just changing my default configuration if the conclusion is to not\ntouch that and let it be, but I find it a bit annoying that switching\nwork_mem from 4MB to 1MB is enough to destabilize the tests. And this\nworked just fine in past releases.\n--\nMichael",
"msg_date": "Sat, 20 Jun 2020 11:48:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Failures with installcheck and low work_mem value in 13~"
},
{
"msg_contents": "On Fri, Jun 19, 2020 at 7:48 PM Michael Paquier <michael@paquier.xyz> wrote:\n> We cared about such plan stability that in the past FWIW, see for\n> example c588df9 as work_mem is a setting that people like to change.\n> Why should this be different? work_mem is a popular configuration\n> setting.\n\nThe RMT met today. We determined that it wasn't worth adjusting this\ntest to pass with non-standard work_mem values.\n\n\"make installcheck\" also fails with lower random_page_cost settings.\nThere doesn't seem to be any reason to permit a non-standard setting\nto cause installcheck to fail elsewhere, while not tolerating the same\nissue here, with work_mem.\n\nThanks\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 25 Jun 2020 12:15:54 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Failures with installcheck and low work_mem value in 13~"
},
{
"msg_contents": "On Thu, Jun 25, 2020 at 12:15:54PM -0700, Peter Geoghegan wrote:\n> The RMT met today. We determined that it wasn't worth adjusting this\n> test to pass with non-standard work_mem values.\n\nOkay, thanks for the feedback. We'll see how it works out.\n--\nMichael",
"msg_date": "Fri, 26 Jun 2020 10:03:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Failures with installcheck and low work_mem value in 13~"
}
] |
[
{
"msg_contents": "Hi,\n\nWe've removed the use of \"slave\" from most of the repo (one use\nremained, included here), but we didn't do the same for master. In the\nattached series I replaced most of the uses.\n\n0001: tap tests: s/master/primary/\n Pretty clear cut imo.\n\n0002: code: s/master/primary/\n This also includes a few minor other changes (s/in master/on the\n primary/, a few 'the's added). Perhaps it'd be better to do those\n separately?\n\n0003: code: s/master/leader/\n This feels pretty obvious. We've largely used the leader / worker\n terminology, but there were a few uses of master left.\n\n0004: code: s/master/$other/\n This is most of the remaining uses of master in code. A number of\n references to 'master' in the context of toast, a few uses of 'master\n copy'. I guess some of these are a bit less clear cut.\n\n0005: docs: s/master/primary/\n These seem mostly pretty straightforward to me. The changes in\n high-availability.sgml probably deserve the most attention.\n\n0006: docs: s/master/root/\n Here using root seems a lot better than master anyway (master seems\n confusing in regard to inheritance scenarios). But perhaps parent\n would be better? Went with root since it's about the topmost table.\n\n0007: docs: s/master/supervisor/\n I guess this could be a bit more contentious. Supervisor seems clearer\n to me, but I can see why people would disagree. See also later point\n about changes I have not done at this stage.\n\n0008: docs: WIP multi-master rephrasing.\n I like neither the new nor the old language much. I'd welcome input.\n\n\nAfter this series there are only two widespread use of 'master' in the\ntree.\n1) 'postmaster'. As changing that would be somewhat invasive, the word\n is a bit more ambiguous, and it's largely just internal, I've left\n this alone for now. I personally would rather see this renamed as\n supervisor, which'd imo actually would also be a lot more\n descriptive. 
I'm willing to do the work, but only if there's at least\n some agreement.\n2) 'master' as a reference to the branch. Personally I be in favor of\n changing the branch name, but it seems like it'd be better done as a\n somewhat separate discussion to me, as it affects development\n practices to some degree.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Mon, 15 Jun 2020 11:22:35 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "language cleanups in code and docs"
},
{
"msg_contents": "> On 15 Jun 2020, at 20:22, Andres Freund <andres@anarazel.de> wrote:\n\nThanks for picking this up!\n\n> 1) 'postmaster'. As changing that would be somewhat invasive, the word\n> is a bit more ambiguous, and it's largely just internal, I've left\n> this alone for now. I personally would rather see this renamed as\n> supervisor, which'd imo actually would also be a lot more\n> descriptive. I'm willing to do the work, but only if there's at least\n> some agreement.\n\nFWIW, I've never really liked the name postmaster as I don't think it conveys\nmeaning. I support renaming to supervisor or a similar term.\n\ncheers ./daniel\n\n\n",
"msg_date": "Mon, 15 Jun 2020 21:04:07 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "On Tue, Jun 16, 2020 at 7:04 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > On 15 Jun 2020, at 20:22, Andres Freund <andres@anarazel.de> wrote:\n>\n> Thanks for picking this up!\n>\n> > 1) 'postmaster'. As changing that would be somewhat invasive, the word\n> > is a bit more ambiguous, and it's largely just internal, I've left\n> > this alone for now. I personally would rather see this renamed as\n> > supervisor, which'd imo actually would also be a lot more\n> > descriptive. I'm willing to do the work, but only if there's at least\n> > some agreement.\n>\n> FWIW, I've never really liked the name postmaster as I don't think it conveys\n> meaning. I support renaming to supervisor or a similar term.\n\n+1. Postmaster has always sounded like a mailer daemon or something,\nwhich we ain't.\n\n\n",
"msg_date": "Tue, 16 Jun 2020 09:53:34 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "On Tue, Jun 16, 2020 at 09:53:34AM +1200, Thomas Munro wrote:\n> On Tue, Jun 16, 2020 at 7:04 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > > On 15 Jun 2020, at 20:22, Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Thanks for picking this up!\n> >\n> > > 1) 'postmaster'. As changing that would be somewhat invasive, the word\n> > > is a bit more ambiguous, and it's largely just internal, I've left\n> > > this alone for now. I personally would rather see this renamed as\n> > > supervisor, which'd imo actually would also be a lot more\n> > > descriptive. I'm willing to do the work, but only if there's at least\n> > > some agreement.\n> >\n> > FWIW, I've never really liked the name postmaster as I don't think it conveys\n> > meaning. I support renaming to supervisor or a similar term.\n> \n> +1. Postmaster has always sounded like a mailer daemon or something,\n> which we ain't.\n\nPostmaster is historically confused with the postmaster email account:\n\n\thttps://en.wikipedia.org/wiki/Postmaster_(computing)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Mon, 15 Jun 2020 18:35:18 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> On 15 Jun 2020, at 20:22, Andres Freund <andres@anarazel.de> wrote:\n>> 1) 'postmaster'. As changing that would be somewhat invasive, the word\n>> is a bit more ambiguous, and it's largely just internal, I've left\n>> this alone for now. I personally would rather see this renamed as\n>> supervisor, which'd imo actually would also be a lot more\n>> descriptive. I'm willing to do the work, but only if there's at least\n>> some agreement.\n\n> FWIW, I've never really liked the name postmaster as I don't think it conveys\n> meaning. I support renaming to supervisor or a similar term.\n\nMeh. That's carrying PC naming foibles to the point where they\nnegatively affect our users (by breaking start scripts and such).\nI think we should leave this alone.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Jun 2020 19:54:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-15 19:54:25 -0400, Tom Lane wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n> > On 15 Jun 2020, at 20:22, Andres Freund <andres@anarazel.de> wrote:\n> >> 1) 'postmaster'. As changing that would be somewhat invasive, the word\n> >> is a bit more ambiguous, and it's largely just internal, I've left\n> >> this alone for now. I personally would rather see this renamed as\n> >> supervisor, which'd imo actually would also be a lot more\n> >> descriptive. I'm willing to do the work, but only if there's at least\n> >> some agreement.\n>\n> > FWIW, I've never really liked the name postmaster as I don't think it conveys\n> > meaning. I support renaming to supervisor or a similar term.\n>\n> Meh. That's carrying PC naming foibles to the point where they\n> negatively affect our users (by breaking start scripts and such).\n> I think we should leave this alone.\n\npostmaster is just a symlink, which we very well could just leave in\nplace... I was really just thinking of the code level stuff. And I think\nthere's some clarity reasons to rename it as well (see comments by\nothers in the thread).\n\nAnyway, for now my focus is on patches in the series...\n\n- Andres\n\n\n",
"msg_date": "Mon, 15 Jun 2020 17:23:42 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "On Mon, Jun 15, 2020 at 4:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Meh. That's carrying PC naming foibles to the point where they\n> negatively affect our users (by breaking start scripts and such).\n> I think we should leave this alone.\n\n+1. Apart from the practical considerations, I just don't see a\nproblem with the word postmaster. My mother is a postmistress.\n\nI'm in favor of updating any individual instances of the word \"master\"\nto the preferred equivalent in code and code comments, though.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 15 Jun 2020 22:31:13 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "On Tue, Jun 16, 2020 at 2:23 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2020-06-15 19:54:25 -0400, Tom Lane wrote:\n> > Daniel Gustafsson <daniel@yesql.se> writes:\n> > > On 15 Jun 2020, at 20:22, Andres Freund <andres@anarazel.de> wrote:\n> > >> 1) 'postmaster'. As changing that would be somewhat invasive, the word\n> > >> is a bit more ambiguous, and it's largely just internal, I've left\n> > >> this alone for now. I personally would rather see this renamed as\n> > >> supervisor, which'd imo actually would also be a lot more\n> > >> descriptive. I'm willing to do the work, but only if there's at least\n> > >> some agreement.\n> >\n> > > FWIW, I've never really liked the name postmaster as I don't think it\n> conveys\n> > > meaning. I support renaming to supervisor or a similar term.\n> >\n> > Meh. That's carrying PC naming foibles to the point where they\n> > negatively affect our users (by breaking start scripts and such).\n> > I think we should leave this alone.\n>\n> postmaster is just a symlink, which we very well could just leave in\n> place... I was really just thinking of the code level stuff. And I think\n> there's some clarity reasons to rename it as well (see comments by\n> others in the thread).\n>\n>\nIs the symlink even used? If not we could just get rid of it.\n\nI'd be more worried about for example postmaster.pid, which would break a\n*lot* of scripts and integrations. postmaster is also exposed in the system\ncatalogs.\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Tue, Jun 16, 2020 at 2:23 AM Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2020-06-15 19:54:25 -0400, Tom Lane wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n> > On 15 Jun 2020, at 20:22, Andres Freund <andres@anarazel.de> wrote:\n> >> 1) 'postmaster'. 
As changing that would be somewhat invasive, the word\n> >> is a bit more ambiguous, and it's largely just internal, I've left\n> >> this alone for now. I personally would rather see this renamed as\n> >> supervisor, which'd imo actually would also be a lot more\n> >> descriptive. I'm willing to do the work, but only if there's at least\n> >> some agreement.\n>\n> > FWIW, I've never really liked the name postmaster as I don't think it conveys\n> > meaning. I support renaming to supervisor or a similar term.\n>\n> Meh. That's carrying PC naming foibles to the point where they\n> negatively affect our users (by breaking start scripts and such).\n> I think we should leave this alone.\n\npostmaster is just a symlink, which we very well could just leave in\nplace... I was really just thinking of the code level stuff. And I think\nthere's some clarity reasons to rename it as well (see comments by\nothers in the thread).Is the symlink even used? If not we could just get rid of it. I'd be more worried about for example postmaster.pid, which would break a *lot* of scripts and integrations. postmaster is also exposed in the system catalogs.-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Tue, 16 Jun 2020 09:26:34 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "On 6/16/20 3:26 AM, Magnus Hagander wrote:\n> On Tue, Jun 16, 2020 at 2:23 AM Andres Freund wrote:\n> postmaster is just a symlink, which we very well could just leave in\n> place... I was really just thinking of the code level stuff. And I think\n> there's some clarity reasons to rename it as well (see comments by\n> others in the thread).\n> \n> Is the symlink even used? If not we could just get rid of it. \n\n\nI am pretty sure that last time I checked Devrim was still using it in his\nsystemd unit file bundled with the PGDG rpms, although that was probably a\ncouple of years ago.\n\nJoe\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Tue, 16 Jun 2020 09:10:49 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "\nOn 6/16/20 9:10 AM, Joe Conway wrote:\n> On 6/16/20 3:26 AM, Magnus Hagander wrote:\n>> On Tue, Jun 16, 2020 at 2:23 AM Andres Freund wrote:\n>> postmaster is just a symlink, which we very well could just leave in\n>> place... I was really just thinking of the code level stuff. And I think\n>> there's some clarity reasons to rename it as well (see comments by\n>> others in the thread).\n>>\n>> Is the symlink even used? If not we could just get rid of it. \n>\n> I am pretty sure that last time I checked Devrim was still using it in his\n> systemd unit file bundled with the PGDG rpms, although that was probably a\n> couple of years ago.\n>\n\n\nJust checked a recent install and it's there.\n\n\nHonestly, I think I'm with Tom, and we can just let this one alone.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 16 Jun 2020 09:19:32 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> I'd be more worried about for example postmaster.pid, which would break a\n> *lot* of scripts and integrations. postmaster is also exposed in the system\n> catalogs.\n\nOooh, that's an excellent point. A lot of random stuff knows that file\nname.\n\nTo be clear, I'm not against removing incidental uses of the word\n\"master\". But the specific case of \"postmaster\" seems a little too\nfar ingrained to be worth changing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Jun 2020 09:30:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "On Mon, Jun 15, 2020 at 11:22:35AM -0700, Andres Freund wrote:\n> 0006: docs: s/master/root/\n> Here using root seems a lot better than master anyway (master seems\n> confusing in regard to inheritance scenarios). But perhaps parent\n> would be better? Went with root since it's about the topmost table.\n\nBecause we allow multiple levels of inheritance, I have always wanted a\nclear term for the top-most parent.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Tue, 16 Jun 2020 12:47:07 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "Hi Andres,\n\nThanks for doing this!\n\nOn 6/15/20 2:22 PM, Andres Freund wrote:\n> \n> We've removed the use of \"slave\" from most of the repo (one use\n> remained, included here), but we didn't do the same for master. In the\n> attached series I replaced most of the uses.\n> \n> 0001: tap tests: s/master/primary/\n> Pretty clear cut imo.\n\nNothing to argue with here as far as I can see. It's a lot of churn, \nthough, so the sooner it goes in the better so people can update for the \nnext CF.\n\n> 0002: code: s/master/primary/\n> This also includes a few minor other changes (s/in master/on the\n> primary/, a few 'the's added). Perhaps it'd be better to do those\n> separately?\n\nI would only commit the grammar/language separately if the commit as a \nwhole does not go in. Some of them really do make the comments much \nclearer beyond just in/on.\n\nI think the user-facing messages, e.g.:\n\n-\t\t\t errhint(\"Make sure the configuration parameter \\\"%s\\\" is set on the \nmaster server.\",\n+\t\t\t errhint(\"Make sure the configuration parameter \\\"%s\\\" is set on the \nprimary server.\",\n\nshould go in no matter what so we are consistent with our documentation. \nSame for the postgresql.conf updates.\n\n> 0003: code: s/master/leader/\n> This feels pretty obvious. We've largely used the leader / worker\n> terminology, but there were a few uses of master left.\n\nYeah, leader already outnumbers master by a lot. Looks good.\n\n> 0004: code: s/master/$other/\n> This is most of the remaining uses of master in code. A number of\n> references to 'master' in the context of toast, a few uses of 'master\n> copy'. 
I guess some of these are a bit less clear cut.\n\nNot sure I love authoritative, e.g.\n\n+\t * fullPageWrites is the authoritative value used by all backends to\n\nand\n\n+\t * grabbed a WAL insertion lock to read the authoritative value in\n\nPossibly \"shared\"?\n\n+\t * Create the Tcl interpreter subsidiary to pltcl_hold_interp.\n\nMaybe use \"worker\" here? Not much we can do about the Tcl function name, \nthough. It's pretty localized, though, so may not matter much.\n\n> 0005: docs: s/master/primary/\n> These seem mostly pretty straightforward to me. The changes in\n> high-availability.sgml probably deserve the most attention.\n\n+ on primary and the current time on the standby. Delays in transfer\n\non *the* primary\n\nI saw a few places where readability could be improved, but this patch \ndid not make any of them worse, and did make a few better.\n\n> 0006: docs: s/master/root/\n> Here using root seems a lot better than master anyway (master seems\n> confusing in regard to inheritance scenarios). But perhaps parent\n> would be better? Went with root since it's about the topmost table.\n\nI looked through to see if there was an instance of parent that should \nbe changed to root but I didn't see any. Even if there are, it's no \nworse than before.\n\n> 0007: docs: s/master/supervisor/\n> I guess this could be a bit more contentious. Supervisor seems clearer\n> to me, but I can see why people would disagree. See also later point\n> about changes I have not done at this stage.\n\nSupervisor seems OK to me, but the postmaster has a special place since \nthere is only one of them. Main supervisor, root supervisor?\n\n> 0008: docs: WIP multi-master rephrasing.\n> I like neither the new nor the old language much. I'd welcome input.\n\nWhy not multi-primary?\n\n> After this series there are only two widespread use of 'master' in the\n> tree.\n> 1) 'postmaster'. 
As changing that would be somewhat invasive, the word\n> is a bit more ambiguous, and it's largely just internal, I've left\n> this alone for now. I personally would rather see this renamed as\n> supervisor, which'd imo actually would also be a lot more\n> descriptive. I'm willing to do the work, but only if there's at least\n> some agreement.\n\nFWIW, I don't consider this to be a very big change from an end-user \nperspective. I don't think the majority of users interact directly with \nthe postmaster, rather they are using systemd, pg_ctl, pg_ctlcluster, etc.\n\nAs for postmaster.pid and postmaster.opts, we have renamed plenty of \nthings in the past so this is just one more. They'd be better and \nclearer as postgresql.pid and postgresql.opts, IMO.\n\nOverall I'm +.5 because I may just be ignorant of the pain this will cause.\n\n> 2) 'master' as a reference to the branch. Personally I be in favor of\n> changing the branch name, but it seems like it'd be better done as a\n> somewhat separate discussion to me, as it affects development\n> practices to some degree.\n\nHappily this has no end-user impact. I think \"main\" is a good \nalternative but I agree this feels like a separate topic.\n\nOne last thing -- are we considering back-patching any/all of this?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Tue, 16 Jun 2020 17:14:57 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-16 17:14:57 -0400, David Steele wrote:\n> On 6/15/20 2:22 PM, Andres Freund wrote:\n> > \n> > We've removed the use of \"slave\" from most of the repo (one use\n> > remained, included here), but we didn't do the same for master. In the\n> > attached series I replaced most of the uses.\n> > \n> > 0001: tap tests: s/master/primary/\n> > Pretty clear cut imo.\n> \n> Nothing to argue with here as far as I can see. It's a lot of churn, though,\n> so the sooner it goes in the better so people can update for the next CF.\n\nYea, unless somebody protests I'm planning to push this part soon.\n\n\n> > 0004: code: s/master/$other/\n> > This is most of the remaining uses of master in code. A number of\n> > references to 'master' in the context of toast, a few uses of 'master\n> > copy'. I guess some of these are a bit less clear cut.\n> \n> Not sure I love authoritative, e.g.\n> \n> +\t * fullPageWrites is the authoritative value used by all backends to\n> \n> and\n> \n> +\t * grabbed a WAL insertion lock to read the authoritative value in\n> \n> Possibly \"shared\"?\n\nI don't think shared is necessarily correct for all of these. E.g. in\nthe GetRedoRecPtr() there's two shared values at play, but only one is\n\"authoritative\".\n\n\n> +\t * Create the Tcl interpreter subsidiary to pltcl_hold_interp.\n> \n> Maybe use \"worker\" here? Not much we can do about the Tcl function name,\n> though. It's pretty localized, though, so may not matter much.\n\nI don't think it matters much what we use here\n\n\n> > 0008: docs: WIP multi-master rephrasing.\n> > I like neither the new nor the old language much. I'd welcome input.\n> \n> Why not multi-primary?\n\nMy understanding of primary is that there really can't be two things\nthat are primary in relation to each other. 
active/active is probably\nthe most common term in use besides multi-master.\n\n\n> One last thing -- are we considering back-patching any/all of this?\n\nI don't think there's a good reason to do so.\n\nThanks for the look!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 16 Jun 2020 15:27:10 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "On 6/16/20 6:27 PM, Andres Freund wrote:\n> On 2020-06-16 17:14:57 -0400, David Steele wrote:\n>> On 6/15/20 2:22 PM, Andres Freund wrote:\n> \n>>> 0008: docs: WIP multi-master rephrasing.\n>>> I like neither the new nor the old language much. I'd welcome input.\n>>\n>> Why not multi-primary?\n> \n> My understanding of primary is that there really can't be two things\n> that are primary in relation to each other. \n\nWell, I think the same is true for multi-master and that's pretty common.\n\n> active/active is probably\n> the most common term in use besides multi-master.\n\nWorks for me and can always be updated later if we come up with \nsomething better. At least active-active will be easier to search for.\n\n>> One last thing -- are we considering back-patching any/all of this?\n> \n> I don't think there's a good reason to do so.\n\nI was thinking of back-patching pain but if you don't think that's an \nissue then I'm not worried about it.\n\n> Thanks for the look!\n\nYou are welcome!\n\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Tue, 16 Jun 2020 18:59:25 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "\nOn 6/15/20 2:22 PM, Andres Freund wrote:\n> 2) 'master' as a reference to the branch. Personally I be in favor of\n> changing the branch name, but it seems like it'd be better done as a\n> somewhat separate discussion to me, as it affects development\n> practices to some degree.\n>\n\n\nI'm OK with this, but I would need plenty of notice to get the buildfarm\nadjusted.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 16 Jun 2020 19:41:06 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 6/15/20 2:22 PM, Andres Freund wrote:\n>> 2) 'master' as a reference to the branch. Personally I be in favor of\n>> changing the branch name, but it seems like it'd be better done as a\n>> somewhat separate discussion to me, as it affects development\n>> practices to some degree.\n\n> I'm OK with this, but I would need plenty of notice to get the buildfarm\n> adjusted.\n\n\"master\" is the default branch name established by git, is it not? Not\nsomething we picked. I don't feel like we need to be out front of the\nrest of the world in changing that. It'd be a cheaper change than some of\nthe other proposals in this thread, no doubt, but it would still create\nconfusion for new hackers who are used to the standard git convention.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Jun 2020 19:55:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "On Tue, 16 Jun 2020 at 19:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> > On 6/15/20 2:22 PM, Andres Freund wrote:\n> >> 2) 'master' as a reference to the branch. Personally I be in favor of\n> >> changing the branch name, but it seems like it'd be better done as a\n> >> somewhat separate discussion to me, as it affects development\n> >> practices to some degree.\n>\n> > I'm OK with this, but I would need plenty of notice to get the buildfarm\n> > adjusted.\n>\n> \"master\" is the default branch name established by git, is it not? Not\n> something we picked. I don't feel like we need to be out front of the\n> rest of the world in changing that. It'd be a cheaper change than some of\n> the other proposals in this thread, no doubt, but it would still create\n> confusion for new hackers who are used to the standard git convention.\n>\n\nWhile it is the default I expect that will change soon. Github is planning\non making main the default.\nhttps://www.zdnet.com/article/github-to-replace-master-with-alternative-term-to-avoid-slavery-references/\n\nMany projects are moving from master to main.\n\nI expect it will be less confusing than you think.\n\nDave Cramer\nwww.postgres.rocks\n\nOn Tue, 16 Jun 2020 at 19:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 6/15/20 2:22 PM, Andres Freund wrote:\n>> 2) 'master' as a reference to the branch. Personally I be in favor of\n>> changing the branch name, but it seems like it'd be better done as a\n>> somewhat separate discussion to me, as it affects development\n>> practices to some degree.\n\n> I'm OK with this, but I would need plenty of notice to get the buildfarm\n> adjusted.\n\n\"master\" is the default branch name established by git, is it not? Not\nsomething we picked. I don't feel like we need to be out front of the\nrest of the world in changing that. 
It'd be a cheaper change than some of\nthe other proposals in this thread, no doubt, but it would still create\nconfusion for new hackers who are used to the standard git convention.While it is the default I expect that will change soon. Github is planning on making main the default. https://www.zdnet.com/article/github-to-replace-master-with-alternative-term-to-avoid-slavery-references/Many projects are moving from master to main.I expect it will be less confusing than you think.Dave Cramerwww.postgres.rocks",
"msg_date": "Tue, 16 Jun 2020 20:31:48 -0400",
"msg_from": "Dave Cramer <davecramer@postgres.rocks>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "On 2020-Jun-16, Tom Lane wrote:\n\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> > On 6/15/20 2:22 PM, Andres Freund wrote:\n> >> 2) 'master' as a reference to the branch. Personally I be in favor of\n> >> changing the branch name, but it seems like it'd be better done as a\n> >> somewhat separate discussion to me, as it affects development\n> >> practices to some degree.\n> \n> > I'm OK with this, but I would need plenty of notice to get the buildfarm\n> > adjusted.\n> \n> \"master\" is the default branch name established by git, is it not? Not\n> something we picked. I don't feel like we need to be out front of the\n> rest of the world in changing that. It'd be a cheaper change than some of\n> the other proposals in this thread, no doubt, but it would still create\n> confusion for new hackers who are used to the standard git convention.\n\nGit itself is discussing this:\nhttps://public-inbox.org/git/41438A0F-50E4-4E58-A3A7-3DAAECB5576B@jramsay.com.au/T/#t\nand it seems that \"main\" is the winning choice.\n\n(Personally) I would leave master to have root, would leave root to have\ndefault, would leave default to have primary, would leave primary to\nhave base, would leave base to have main, would leave main to have\nmainline.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 16 Jun 2020 20:53:29 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Jun-16, Tom Lane wrote:\n>> \"master\" is the default branch name established by git, is it not? Not\n>> something we picked.\n\n> Git itself is discussing this:\n> https://public-inbox.org/git/41438A0F-50E4-4E58-A3A7-3DAAECB5576B@jramsay.com.au/T/#t\n> and it seems that \"main\" is the winning choice.\n\nOh, interesting. If they do change I'd be happy to follow suit.\nBut let's wait and see what they do, rather than possibly ending\nup with our own private convention.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Jun 2020 21:43:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "On Wed, Jun 17, 2020 at 3:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2020-Jun-16, Tom Lane wrote:\n> >> \"master\" is the default branch name established by git, is it not? Not\n> >> something we picked.\n>\n> > Git itself is discussing this:\n> >\n> https://public-inbox.org/git/41438A0F-50E4-4E58-A3A7-3DAAECB5576B@jramsay.com.au/T/#t\n> > and it seems that \"main\" is the winning choice.\n>\n> Oh, interesting. If they do change I'd be happy to follow suit.\n> But let's wait and see what they do, rather than possibly ending\n> up with our own private convention.\n>\n\nI'm +1 for changing it (with good warning time to handle the buildfarm\nsituation), but also very much +1 for waiting to see exactly what upstream\n(git) decides on and make sure we change to the same. The worst possible\ncombination would be that we change it to something that's *different* than\nupstream ends up with (even if upstream ends up being configurable).\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Wed, Jun 17, 2020 at 3:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Jun-16, Tom Lane wrote:\n>> \"master\" is the default branch name established by git, is it not? Not\n>> something we picked.\n\n> Git itself is discussing this:\n> https://public-inbox.org/git/41438A0F-50E4-4E58-A3A7-3DAAECB5576B@jramsay.com.au/T/#t\n> and it seems that \"main\" is the winning choice.\n\nOh, interesting. If they do change I'd be happy to follow suit.\nBut let's wait and see what they do, rather than possibly ending\nup with our own private convention.I'm +1 for changing it (with good warning time to handle the buildfarm situation), but also very much +1 for waiting to see exactly what upstream (git) decides on and make sure we change to the same. 
The worst possible combination would be that we change it to something that's *different* than upstream ends up with (even if upstream ends up being configurable).-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Wed, 17 Jun 2020 10:53:31 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "On Tue, 2020-06-16 at 19:55 -0400, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> > On 6/15/20 2:22 PM, Andres Freund wrote:\n> > > 2) 'master' as a reference to the branch. Personally I be in favor of\n> > > changing the branch name, but it seems like it'd be better done as a\n> > > somewhat separate discussion to me, as it affects development\n> > > practices to some degree.\n> > I'm OK with this, but I would need plenty of notice to get the buildfarm\n> > adjusted.\n> \n> \"master\" is the default branch name established by git, is it not? Not\n> something we picked. I don't feel like we need to be out front of the\n> rest of the world in changing that. It'd be a cheaper change than some of\n> the other proposals in this thread, no doubt, but it would still create\n> confusion for new hackers who are used to the standard git convention.\n\nI have the feeling that all this is going somewhat too far.\n\nI feel fine with removing the term \"slave\", even though I have no qualms\nabout enslaving machines.\n\nBut the term \"master\" is not restricted to slavery. It can just as well\nimply expert knowledge (think academic degree), and it can denote someone\nwith the authority to command (there is nothing wrong with \"servant\", right?\nOr do we have to abolish the term \"server\" too?).\n\nI appreciate that other people's sensitivities might be different, and I\ndon't want to start a fight over it. But renaming things makes the code\nhistory harder to read, so it should be used with moderation.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Wed, 17 Jun 2020 12:06:18 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "On Mon, Jun 15, 2020 at 8:23 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> We've removed the use of \"slave\" from most of the repo (one use\n> remained, included here), but we didn't do the same for master. In the\n> attached series I replaced most of the uses.\n>\n> 0001: tap tests: s/master/primary/\n> Pretty clear cut imo.\n>\n> 0002: code: s/master/primary/\n> This also includes a few minor other changes (s/in master/on the\n> primary/, a few 'the's added). Perhaps it'd be better to do those\n> separately?\n>\n> 0003: code: s/master/leader/\n> This feels pretty obvious. We've largely used the leader / worker\n> terminology, but there were a few uses of master left.\n>\n> 0004: code: s/master/$other/\n> This is most of the remaining uses of master in code. A number of\n> references to 'master' in the context of toast, a few uses of 'master\n> copy'. I guess some of these are a bit less clear cut.\n>\n> 0005: docs: s/master/primary/\n> These seem mostly pretty straightforward to me. The changes in\n> high-availability.sgml probably deserve the most attention.\n>\n> 0006: docs: s/master/root/\n> Here using root seems a lot better than master anyway (master seems\n> confusing in regard to inheritance scenarios). But perhaps parent\n> would be better? Went with root since it's about the topmost table.\n>\n> 0007: docs: s/master/supervisor/\n> I guess this could be a bit more contentious. Supervisor seems clearer\n> to me, but I can see why people would disagree. See also later point\n> about changes I have not done at this stage.\n>\n> 0008: docs: WIP multi-master rephrasing.\n> I like neither the new nor the old language much. I'd welcome input.\n>\n>\n> After this series there are only two widespread use of 'master' in the\n> tree.\n> 1) 'postmaster'. As changing that would be somewhat invasive, the word\n> is a bit more ambiguous, and it's largely just internal, I've left\n> this alone for now. 
I personally would rather see this renamed as\n> supervisor, which'd imo actually would also be a lot more\n> descriptive. I'm willing to do the work, but only if there's at least\n> some agreement.\n> 2) 'master' as a reference to the branch. Personally I be in favor of\n> changing the branch name, but it seems like it'd be better done as a\n> somewhat separate discussion to me, as it affects development\n> practices to some degree.\n>\n>\nIn looking at this I realize we also have exactly one thing referred to as\n\"blacklist\" in our codebase, which is the \"enum blacklist\" (and then a\nsmall internal variable in pgindent). AFAICT, it's not actually exposed to\nuserspace anywhere, so we could probably make the attached change to\nblocklist at no \"cost\" (the only thing changed is the name of the hash\ntable, and we definitely change things like that in normal releases with no\nspecific thought on backwards compat).\n\n//Magnus",
"msg_date": "Wed, 17 Jun 2020 12:32:14 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "On 6/17/20 6:32 AM, Magnus Hagander wrote:\n\n> In looking at this I realize we also have exactly one thing referred to\n> as \"blacklist\" in our codebase, which is the \"enum blacklist\" (and then\n> a small internal variable in pgindent). AFAICT, it's not actually\n> exposed to userspace anywhere, so we could probably make the attached\n> change to blocklist at no \"cost\" (the only thing changed is the name of\n> the hash table, and we definitely change things like that in normal\n> releases with no specific thought on backwards compat).\n\n+1. Though if we are doing that, we should also handle \"whitelist\" too,\nas this attached patch does. It's mostly in comments (with one Perl\nvariable), but I switched the language around to use \"allowed\"\n\nJonathan",
"msg_date": "Wed, 17 Jun 2020 09:32:55 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "On 6/17/20 6:06 AM, Laurenz Albe wrote:\n> On Tue, 2020-06-16 at 19:55 -0400, Tom Lane wrote:\n>> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>>> On 6/15/20 2:22 PM, Andres Freund wrote:\n>>>> 2) 'master' as a reference to the branch. Personally I be in favor of\n>>>> changing the branch name, but it seems like it'd be better done as a\n>>>> somewhat separate discussion to me, as it affects development\n>>>> practices to some degree.\n>>> I'm OK with this, but I would need plenty of notice to get the buildfarm\n>>> adjusted.\n>>\n>> \"master\" is the default branch name established by git, is it not? Not\n>> something we picked. I don't feel like we need to be out front of the\n>> rest of the world in changing that. It'd be a cheaper change than some of\n>> the other proposals in this thread, no doubt, but it would still create\n>> confusion for new hackers who are used to the standard git convention.\n> \n> I have the feeling that all this is going somewhat too far.\n\nFirst, I +1 the changes Andres proposed overall. In addition to it being\nthe right thing to do, it brings inline a lot of the terminology we have\nbeen using to describe concepts in PostgreSQL for years (e.g.\nprimary/replica).\n\nFor the name of the git branch, I +1 following the convention of the git\nupstream, and make changes based on that. Understandably, it could break\nthings for a bit, but that will occur for a lot of other projects as\nwell and everyone will adopt. We have the benefit that we're just\nbeginning our new development cycle too, so this is a good time to\nintroduce breaking change ;)\n\nJonathan",
"msg_date": "Wed, 17 Jun 2020 09:36:58 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "\nOn 6/17/20 6:32 AM, Magnus Hagander wrote:\n>\n>\n>\n>\n>\n> In looking at this I realize we also have exactly one thing referred\n> to as \"blacklist\" in our codebase, which is the \"enum blacklist\" (and\n> then a small internal variable in pgindent). AFAICT, it's not actually\n> exposed to userspace anywhere, so we could probably make the attached\n> change to blocklist at no \"cost\" (the only thing changed is the name\n> of the hash table, and we definitely change things like that in normal\n> releases with no specific thought on backwards compat).\n>\n>\n\nI'm not sure I like doing s/Black/Block/ here. It reads oddly. There are\ntoo many other uses of Block in the sources. Forbidden might be a better\nsubstitution, or Banned maybe. BanList is even less characters than\nBlackList.\n\n\nI know, bikeshedding here.\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 17 Jun 2020 10:15:40 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 6/17/20 6:32 AM, Magnus Hagander wrote:\n>> In looking at this I realize we also have exactly one thing referred\n>> to as \"blacklist\" in our codebase, which is the \"enum blacklist\" (and\n>> then a small internal variable in pgindent). AFAICT, it's not actually\n>> exposed to userspace anywhere, so we could probably make the attached\n>> change to blocklist at no \"cost\" (the only thing changed is the name\n>> of the hash table, and we definitely change things like that in normal\n>> releases with no specific thought on backwards compat).\n\n> I'm not sure I like doing s/Black/Block/ here. It reads oddly. There are\n> too many other uses of Block in the sources. Forbidden might be a better\n> substitution, or Banned maybe. BanList is even less characters than\n> BlackList.\n\nI think worrying about blacklist/whitelist is carrying things a bit far\nin the first place.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Jun 2020 11:00:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "\nOn 6/17/20 11:00 AM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> On 6/17/20 6:32 AM, Magnus Hagander wrote:\n>>> In looking at this I realize we also have exactly one thing referred\n>>> to as \"blacklist\" in our codebase, which is the \"enum blacklist\" (and\n>>> then a small internal variable in pgindent). AFAICT, it's not actually\n>>> exposed to userspace anywhere, so we could probably make the attached\n>>> change to blocklist at no \"cost\" (the only thing changed is the name\n>>> of the hash table, and we definitely change things like that in normal\n>>> releases with no specific thought on backwards compat).\n>> I'm not sure I like doing s/Black/Block/ here. It reads oddly. There are\n>> too many other uses of Block in the sources. Forbidden might be a better\n>> substitution, or Banned maybe. BanList is even less characters than\n>> BlackList.\n> I think worrying about blacklist/whitelist is carrying things a bit far\n> in the first place.\n\n\nFor the small effort and minimal impact involved, I think it's worth\navoiding the bad publicity.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 17 Jun 2020 12:08:32 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "On Wed, Jun 17, 2020 at 4:15 PM Andrew Dunstan <\nandrew.dunstan@2ndquadrant.com> wrote:\n\n>\n> On 6/17/20 6:32 AM, Magnus Hagander wrote:\n> >\n> >\n> >\n> >\n> >\n> > In looking at this I realize we also have exactly one thing referred\n> > to as \"blacklist\" in our codebase, which is the \"enum blacklist\" (and\n> > then a small internal variable in pgindent). AFAICT, it's not actually\n> > exposed to userspace anywhere, so we could probably make the attached\n> > change to blocklist at no \"cost\" (the only thing changed is the name\n> > of the hash table, and we definitely change things like that in normal\n> > releases with no specific thought on backwards compat).\n> >\n> >\n>\n> I'm not sure I like doing s/Black/Block/ here. It reads oddly. There are\n> too many other uses of Block in the sources. Forbidden might be a better\n> substitution, or Banned maybe. BanList is even less characters than\n> BlackList.\n>\n>\n> I know, bikeshedding here.\n>\n\nI'd be OK with either of those really -- I went with block because it was\nthe easiest one :)\n\nNot sure the number of characters is the important part :) Banlist does\nmake sense to me for other reasons though -- it's what it is, isn't it? 
It\nbans those oids from being used in the current session -- I don't think\nthere's any struggle to \"make that sentence work\", which means that seems\nlike the relevant term.\n\nI do think it's worth doing -- it's a small round of changes, and it\ndoesn't change anything user-exposed, so the cost for us is basically zero.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Wed, 17 Jun 2020 18:08:34 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "On 6/17/20 12:08 PM, Magnus Hagander wrote:\n> \n> \n> On Wed, Jun 17, 2020 at 4:15 PM Andrew Dunstan\n> <andrew.dunstan@2ndquadrant.com <mailto:andrew.dunstan@2ndquadrant.com>>\n> wrote:\n> \n> \n> On 6/17/20 6:32 AM, Magnus Hagander wrote:\n> >\n> >\n> >\n> >\n> >\n> > In looking at this I realize we also have exactly one thing referred\n> > to as \"blacklist\" in our codebase, which is the \"enum blacklist\" (and\n> > then a small internal variable in pgindent). AFAICT, it's not actually\n> > exposed to userspace anywhere, so we could probably make the attached\n> > change to blocklist at no \"cost\" (the only thing changed is the name\n> > of the hash table, and we definitely change things like that in normal\n> > releases with no specific thought on backwards compat).\n> >\n> >\n> \n> I'm not sure I like doing s/Black/Block/ here. It reads oddly. There are\n> too many other uses of Block in the sources. Forbidden might be a better\n> substitution, or Banned maybe. BanList is even less characters than\n> BlackList.\n> \n> \n> I know, bikeshedding here.\n> \n> \n> I'd be OK with either of those really -- I went with block because it\n> was the easiest one :)\n> \n> Not sure the number of characters is the important part :) Banlist does\n> make sense to me for other reasons though -- it's what it is, isn't it?\n> It bans those oids from being used in the current session -- I don't\n> think there's any struggle to \"make that sentence work\", which means\n> that seems like the relevant term.\n> \n> I do think it's worth doing -- it's a small round of changes, and it\n> doesn't change anything user-exposed, so the cost for us is basically zero. \n\n+1. I know post efforts for us to update our language have been\nwell-received, even long after the fact, and given this set has been\nvoiced actively and other fora and, as Magnus states, the cost for us to\nchange it is basically zero, we should just do it.\n\nJonathan",
"msg_date": "Wed, 17 Jun 2020 12:15:25 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "On 6/17/20 12:08 PM, Magnus Hagander wrote:\n> On Wed, Jun 17, 2020 at 4:15 PM Andrew Dunstan \n> <andrew.dunstan@2ndquadrant.com <mailto:andrew.dunstan@2ndquadrant.com>> \n> \n> I'm not sure I like doing s/Black/Block/ here. It reads oddly. There are\n> too many other uses of Block in the sources. Forbidden might be a better\n> substitution, or Banned maybe. BanList is even less characters than\n> BlackList.\n> \n> I'd be OK with either of those really -- I went with block because it \n> was the easiest one :)\n> \n> Not sure the number of characters is the important part :) Banlist does \n> make sense to me for other reasons though -- it's what it is, isn't it? \n> It bans those oids from being used in the current session -- I don't \n> think there's any struggle to \"make that sentence work\", which means \n> that seems like the relevant term.\n\nI've seen also seen allowList/denyList as an alternative. I do agree \nthat blockList is a bit confusing since we often use block in a very \ndifferent context.\n\n> I do think it's worth doing -- it's a small round of changes, and it \n> doesn't change anything user-exposed, so the cost for us is basically zero.\n\n+1\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Wed, 17 Jun 2020 12:27:03 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "On Mon, Jun 15, 2020 at 2:23 PM Andres Freund <andres@anarazel.de> wrote:\n> 0002: code: s/master/primary/\n> 0003: code: s/master/leader/\n> 0006: docs: s/master/root/\n> 0007: docs: s/master/supervisor/\n\nI'd just like to make the pointer here that there's value in trying to\nuse different terminology for different things. I picked \"leader\" and\n\"worker\" for parallel query and tried to use them consistently because\n\"master\" and \"slave\" were being used widely to refer to physical\nreplication, and I thought it would be clearer to use something\ndifferent, so I did. It's confusing if we use the same word for the\nserver from which others replicate, the table from which others\ninherit, the process which initiates parallelism, and the first\nprocess that is launched across the whole cluster, regardless of\n*which* word we use for those things. So, I think there is every\npossibility that with careful thought, we can actually make things\nclearer, in addition to avoiding the use of terms that are no longer\nwelcome.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 17 Jun 2020 13:59:26 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Jun 15, 2020 at 2:23 PM Andres Freund <andres@anarazel.de> wrote:\n>> 0002: code: s/master/primary/\n>> 0003: code: s/master/leader/\n>> 0006: docs: s/master/root/\n>> 0007: docs: s/master/supervisor/\n\n> I'd just like to make the pointer here that there's value in trying to\n> use different terminology for different things.\n\n+1 for that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Jun 2020 14:16:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-17 13:59:26 -0400, Robert Haas wrote:\n> On Mon, Jun 15, 2020 at 2:23 PM Andres Freund <andres@anarazel.de> wrote:\n> > 0002: code: s/master/primary/\n> > 0003: code: s/master/leader/\n> > 0006: docs: s/master/root/\n> > 0007: docs: s/master/supervisor/\n> \n> I'd just like to make the pointer here that there's value in trying to\n> use different terminology for different things. I picked \"leader\" and\n> \"worker\" for parallel query and tried to use them consistently because\n> \"master\" and \"slave\" were being used widely to refer to physical\n> replication, and I thought it would be clearer to use something\n> different, so I did.\n\nJust to be clear, that's exactly what I tried to do in the above\npatches. E.g. in 0003 I tried to follow the scheme you just\noutlined. There's a part of that patch that addresses pg_dump, but most\nof the rest is just parallelism related pieces that ended up using\nmaster, even though leader is the more widely used term. I assume you\nwere just saying that the above use of different terms is actually\nhelpful:\n\n> It's confusing if we use the same word for the server from which\n> others replicate, the table from which others inherit, the process\n> which initiates parallelism, and the first process that is launched\n> across the whole cluster, regardless of *which* word we use for those\n> things. So, I think there is every possibility that with careful\n> thought, we can actually make things clearer, in addition to avoiding\n> the use of terms that are no longer welcome.\n\nWith which I wholeheartedly agree.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Jun 2020 11:18:59 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "On Wed, Jun 17, 2020 at 01:59:26PM -0400, Robert Haas wrote:\n> On Mon, Jun 15, 2020 at 2:23 PM Andres Freund <andres@anarazel.de> wrote:\n> > 0002: code: s/master/primary/\n> > 0003: code: s/master/leader/\n> > 0006: docs: s/master/root/\n> > 0007: docs: s/master/supervisor/\n> \n> I'd just like to make the pointer here that there's value in trying to\n> use different terminology for different things. I picked \"leader\" and\n> \"worker\" for parallel query and tried to use them consistently because\n> \"master\" and \"slave\" were being used widely to refer to physical\n> replication, and I thought it would be clearer to use something\n> different, so I did. It's confusing if we use the same word for the\n> server from which others replicate, the table from which others\n> inherit, the process which initiates parallelism, and the first\n> process that is launched across the whole cluster, regardless of\n> *which* word we use for those things. So, I think there is every\n> possibility that with careful thought, we can actually make things\n> clearer, in addition to avoiding the use of terms that are no longer\n> welcome.\n\nI think the question is whether we can improve our terms as part of this\nrewording, or if we make them worse. When we got rid of slave and made\nit standby, I think we made things worse since many of the replicas were\nnot functioning for the purpose of standby. Standby is a role, not a\nstatus, while replica is a status.\n\nThe other issue is how the terms interlink with other terms. When we\nused master/slave, multi-master matched the wording, but replication\ndidn't match. If we go with replica, replication works, and\nprimary/replica kind of fits, e.g., master/replica does not.\nMulti-master then no longer fits, multi-primary sounds odd, and\nactive-active doesn't match, though active-active is not used as much as\nprimary/replica, so maybe that is OK. 
Ideally we would have all terms\nmatching, but maybe that is impossible.\n\nMy point is that these terms are symbolic (similes) --- the new terms\nshould link to their roles and to other terms in a logical way.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Wed, 17 Jun 2020 18:24:00 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "Hi,\n\nI've pushed most of the changes.\n\nOn 2020-06-16 18:59:25 -0400, David Steele wrote:\n> On 6/16/20 6:27 PM, Andres Freund wrote:\n> > On 2020-06-16 17:14:57 -0400, David Steele wrote:\n> > > On 6/15/20 2:22 PM, Andres Freund wrote:\n> > \n> > > > 0008: docs: WIP multi-master rephrasing.\n> > > > I like neither the new nor the old language much. I'd welcome input.\n> > > \n> > > Why not multi-primary?\n> > \n> > My understanding of primary is that there really can't be two things\n> > that are primary in relation to each other.\n> \n> Well, I think the same is true for multi-master and that's pretty common.\n> \n> > active/active is probably\n> > the most common term in use besides multi-master.\n> \n> Works for me and can always be updated later if we come up with something\n> better. At least active-active will be easier to search for.\n\nWhat do you think about the attached?\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 8 Jul 2020 13:39:17 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "On 7/8/20 4:39 PM, Andres Freund wrote:\n> Hi,\n> \n> I've pushed most of the changes.\n> \n> On 2020-06-16 18:59:25 -0400, David Steele wrote:\n>> On 6/16/20 6:27 PM, Andres Freund wrote:\n>>> On 2020-06-16 17:14:57 -0400, David Steele wrote:\n>>>> On 6/15/20 2:22 PM, Andres Freund wrote:\n>>>\n>>>>> 0008: docs: WIP multi-master rephrasing.\n>>>>> I like neither the new nor the old language much. I'd welcome input.\n>>>>\n>>>> Why not multi-primary?\n>>>\n>>> My understanding of primary is that there really can't be two things\n>>> that are primary in relation to each other.\n>>\n>> Well, I think the same is true for multi-master and that's pretty common.\n>>\n>>> active/active is probably\n>>> the most common term in use besides multi-master.\n>>\n>> Works for me and can always be updated later if we come up with something\n>> better. At least active-active will be easier to search for.\n> \n> What do you think about the attached?\n\nI think this phrasing in the original/updated version is pretty awkward:\n\n+ A standby server that cannot be connected to until it is promoted to a\n+ primary server is called a ...\n\nHow about:\n\n+ A standby server that must be promoted to a primary server before\n+ accepting connections is called a ...\n\nOther than that it looks good to me.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Wed, 8 Jul 2020 17:09:42 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "On 2020-Jul-08, David Steele wrote:\n\n> On 7/8/20 4:39 PM, Andres Freund wrote:\n\n> I think this phrasing in the original/updated version is pretty awkward:\n> \n> + A standby server that cannot be connected to until it is promoted to a\n> + primary server is called a ...\n\nYeah.\n\n> How about:\n> \n> + A standby server that must be promoted to a primary server before\n> + accepting connections is called a ...\n\nHow about just reducing it to \"A standby server that doesn't accept\nconnection is called a ...\"? We don't really need to explain that if\nyou do promote the standby it will start accept connections -- do we?\nIt should be pretty obvious if you promote a standby, it will cease to\nbe a standby in the first place. This verbiage seems excessive.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 8 Jul 2020 17:17:56 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "On 7/8/20 5:17 PM, Alvaro Herrera wrote:\n> On 2020-Jul-08, David Steele wrote:\n> \n>> On 7/8/20 4:39 PM, Andres Freund wrote:\n> \n>> I think this phrasing in the original/updated version is pretty awkward:\n>>\n>> + A standby server that cannot be connected to until it is promoted to a\n>> + primary server is called a ...\n> \n> Yeah.\n> \n>> How about:\n>>\n>> + A standby server that must be promoted to a primary server before\n>> + accepting connections is called a ...\n> \n> How about just reducing it to \"A standby server that doesn't accept\n> connection is called a ...\"? We don't really need to explain that if\n> you do promote the standby it will start accept connections -- do we?\n> It should be pretty obvious if you promote a standby, it will cease to\n> be a standby in the first place. This verbiage seems excessive.\n\nWorks for me.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Wed, 8 Jul 2020 17:19:41 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "On Wed, Jun 17, 2020 at 9:27 AM David Steele <david@pgmasters.net> wrote:\n\n> On 6/17/20 12:08 PM, Magnus Hagander wrote:\n> > On Wed, Jun 17, 2020 at 4:15 PM Andrew Dunstan\n> > <andrew.dunstan@2ndquadrant.com <mailto:andrew.dunstan@2ndquadrant.com>>\n>\n> >\n> > I'm not sure I like doing s/Black/Block/ here. It reads oddly. There\n> are\n> > too many other uses of Block in the sources. Forbidden might be a\n> better\n> > substitution, or Banned maybe. BanList is even less characters than\n> > BlackList.\n> >\n> > I'd be OK with either of those really -- I went with block because it\n> > was the easiest one :)\n> >\n> > Not sure the number of characters is the important part :) Banlist does\n> > make sense to me for other reasons though -- it's what it is, isn't it?\n> > It bans those oids from being used in the current session -- I don't\n> > think there's any struggle to \"make that sentence work\", which means\n> > that seems like the relevant term.\n>\n> I've seen also seen allowList/denyList as an alternative. I do agree\n> that blockList is a bit confusing since we often use block in a very\n> different context.\n>\n\n+1 for allowList/denyList as alternative\n\n> I do think it's worth doing -- it's a small round of changes, and it\n> > doesn't change anything user-exposed, so the cost for us is basically\n> zero.\n>\n> +1\n\n\nAgree number of occurrences for whitelist and blacklist are not many, so\ncleaning these would be helpful and patches already proposed for it\n\ngit grep whitelist | wc -l\n10\ngit grep blacklist | wc -l\n40\n\nThanks a lot for language cleanups. 
Greenplum, fork of PostgreSQL, wishes\nto perform similar cleanups and upstream doing it really helps us\ndownstream.",
"msg_date": "Thu, 20 Aug 2020 11:34:08 -0700",
"msg_from": "Ashwin Agrawal <ashwinstar@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "On Wed, Jun 17, 2020 at 10:32 PM Magnus Hagander <magnus@hagander.net> wrote:\n> In looking at this I realize we also have exactly one thing referred to as \"blacklist\" in our codebase, which is the \"enum blacklist\" (and then a small internal variable in pgindent). AFAICT, it's not actually exposed to userspace anywhere, so we could probably make the attached change to blocklist at no \"cost\" (the only thing changed is the name of the hash table, and we definitely change things like that in normal releases with no specific thought on backwards compat).\n\n+1\n\nHmm, can we find a more descriptive name for this mechanism? What\nabout calling it the \"uncommitted enum table\"? See attached.",
"msg_date": "Thu, 22 Oct 2020 10:22:32 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "On Wed, Oct 21, 2020 at 11:23 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Wed, Jun 17, 2020 at 10:32 PM Magnus Hagander <magnus@hagander.net> wrote:\n> > In looking at this I realize we also have exactly one thing referred to as \"blacklist\" in our codebase, which is the \"enum blacklist\" (and then a small internal variable in pgindent). AFAICT, it's not actually exposed to userspace anywhere, so we could probably make the attached change to blocklist at no \"cost\" (the only thing changed is the name of the hash table, and we definitely change things like that in normal releases with no specific thought on backwards compat).\n>\n> +1\n>\n> Hmm, can we find a more descriptive name for this mechanism? What\n> about calling it the \"uncommitted enum table\"? See attached.\n\nThanks for picking this one up again!\n\nAgreed, that's a much better choice.\n\nThe term itself is a bit of a mouthful, but it does describe what it\nis in a much more clear way than what the old term did anyway.\n\nMaybe consider just calling it \"uncomitted enums\", which would as a\nbonus have it not end up talking about uncommittedenumtablespace which\ngets hits on searches for tablespace.. Though I'm not sure that's\nimportant.\n\nI'm +1 to the change with or without that adjustment.\n\nAs for the code, I note that:\n+ /* Set up the enum table if not already done in this transaction */\n\nforgets to say it's *uncomitted* enum table -- which is the important\npart of I believe.\n\nAnd\n+ * Test if the given enum value is in the table of blocked enums.\n\nshould probably talk about uncommitted rather than blocked?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Tue, 3 Nov 2020 16:10:09 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "On Wed, Nov 4, 2020 at 4:10 AM Magnus Hagander <magnus@hagander.net> wrote:\n> On Wed, Oct 21, 2020 at 11:23 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Hmm, can we find a more descriptive name for this mechanism? What\n> > about calling it the \"uncommitted enum table\"? See attached.\n>\n> Thanks for picking this one up again!\n>\n> Agreed, that's a much better choice.\n>\n> The term itself is a bit of a mouthful, but it does describe what it\n> is in a much more clear way than what the old term did anyway.\n>\n> Maybe consider just calling it \"uncomitted enums\", which would as a\n> bonus have it not end up talking about uncommittedenumtablespace which\n> gets hits on searches for tablespace.. Though I'm not sure that's\n> important.\n>\n> I'm +1 to the change with or without that adjustment.\n\nCool. I went with your suggestion.\n\n> As for the code, I note that:\n> + /* Set up the enum table if not already done in this transaction */\n>\n> forgets to say it's *uncomitted* enum table -- which is the important\n> part of I believe.\n\nFixed.\n\n> And\n> + * Test if the given enum value is in the table of blocked enums.\n>\n> should probably talk about uncommitted rather than blocked?\n\nFixed.\n\nAnd pushed.\n\n\n",
"msg_date": "Tue, 5 Jan 2021 12:42:10 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n\n> In looking at this I realize we also have exactly one thing referred to as\n> \"blacklist\" in our codebase, which is the \"enum blacklist\" (and then a\n> small internal variable in pgindent).\n\nHere's a patch that renames the @whitelist and %blacklist variables in\npgindent to @additional and %excluded, and adjusts the comments to\nmatch.\n\n- ilmari\n-- \n\"The surreality of the universe tends towards a maximum\" -- Skud's Law\n\"Never formulate a law or axiom that you're not prepared to live with\n the consequences of.\" -- Skud's Meta-Law",
"msg_date": "Tue, 05 Jan 2021 00:12:39 +0000",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "On Tue, Jan 5, 2021 at 1:12 PM Dagfinn Ilmari Mannsåker\n<ilmari@ilmari.org> wrote:\n> Magnus Hagander <magnus@hagander.net> writes:\n> > In looking at this I realize we also have exactly one thing referred to as\n> > \"blacklist\" in our codebase, which is the \"enum blacklist\" (and then a\n> > small internal variable in pgindent).\n>\n> Here's a patch that renames the @whitelist and %blacklist variables in\n> pgindent to @additional and %excluded, and adjusts the comments to\n> match.\n\nPushed. Thanks!\n\n\n",
"msg_date": "Tue, 5 Jan 2021 13:27:48 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n\n> On Tue, Jan 5, 2021 at 1:12 PM Dagfinn Ilmari Mannsåker\n> <ilmari@ilmari.org> wrote:\n>> Magnus Hagander <magnus@hagander.net> writes:\n>> > In looking at this I realize we also have exactly one thing referred to as\n>> > \"blacklist\" in our codebase, which is the \"enum blacklist\" (and then a\n>> > small internal variable in pgindent).\n>>\n>> Here's a patch that renames the @whitelist and %blacklist variables in\n>> pgindent to @additional and %excluded, and adjusts the comments to\n>> match.\n>\n> Pushed. Thanks!\n\nThanks! Just after sending that, I thought to grep for \"white\\W*list\"\nas well, and found a few more occurrences that were trivially reworded,\nper the attached patch.\n\n- ilmari\n-- \n\"A disappointingly low fraction of the human race is,\n at any given time, on fire.\" - Stig Sandbeck Mathisen",
"msg_date": "Tue, 05 Jan 2021 00:44:13 +0000",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
},
{
"msg_contents": "On Tue, Jan 5, 2021 at 1:44 PM Dagfinn Ilmari Mannsåker\n<ilmari@ilmari.org> wrote:\n> Thanks! Just after sending that, I thought to grep for \"white\\W*list\"\n> as well, and found a few more occurrences that were trivially reworded,\n> per the attached patch.\n\nPushed.\n\n\n",
"msg_date": "Tue, 5 Jan 2021 14:01:03 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: language cleanups in code and docs"
}
] |
[
{
"msg_contents": "Internally pg_dump have capability to filter the table data to dump by same\nfilter clause but it have no interface to use it and the patch here [1]\nadds interface to it but it have at-least two issue, one is error message\nin case of incorrect where clause specification is somehow hard to debug\nand strange to pg_dump .Other issue is it applies the same filter clause to\nmultiple tables if pattern matching return multiple tables and it seems\nundesired behavior to me because mostly we don’t want to applied the same\nwhere clause specification to multiple table. The attached patch contain a\nfix for both issue\n\n[1].\nhttps://www.postgresql.org/message-id/flat/CAGiT_HNav5B=OfCdfyFoqTa+oe5W1vG=PXkTETCxXg4kcUTktA@mail.gmail.com\n\n\nregards\n\nSurafel",
"msg_date": "Mon, 15 Jun 2020 23:26:13 +0300",
"msg_from": "Surafel Temesgen <surafel3000@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_dump --where option"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, failed\nImplements feature: tested, failed\nSpec compliant: tested, failed\nDocumentation: tested, failed\n\nHi\r\n\r\nI had a look at the patch and it cleanly applies to postgres master branch. I tried to do a quick test on the new \"where clause\" functionality and for the most part it does the job as described and I'm sure some people will find this feature useful to their database dump needs. However I tried the feature with a case where I have a subquery in the where clause, but it seems to be failing to dump the data. I ran the pg_dump like:\r\n\r\n $ pg_dump -d cary --where=\"test1:a3 = ( select max(aa1) from test2 )\" > testdump2\r\n $ pg_dump: error: processing of table \"public.test1\" failed\r\n\r\nboth test1 and test2 exist in the database and the same subquery works under psql.\r\n \r\nI also notice that the regression tests for pg_dump is failing due to the patch, I think it is worth looking into the failure messages and also add some test cases on the new \"where\" clause to ensure that it can cover as many use cases as possible.\r\n\r\nthank you\r\nBest regards\r\n\r\nCary Huang\r\n-------------\r\nHighGo Software Inc. (Canada)\r\ncary.huang@highgo.ca\r\nwww.highgo.ca",
"msg_date": "Fri, 10 Jul 2020 00:03:57 +0000",
"msg_from": "Cary Huang <cary.huang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump --where option"
},
{
"msg_contents": "> On 10 Jul 2020, at 02:03, Cary Huang <cary.huang@highgo.ca> wrote:\n> \n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, failed\n> Implements feature: tested, failed\n> Spec compliant: tested, failed\n> Documentation: tested, failed\n> \n> Hi\n> \n> I had a look at the patch and it cleanly applies to postgres master branch. I tried to do a quick test on the new \"where clause\" functionality and for the most part it does the job as described and I'm sure some people will find this feature useful to their database dump needs. However I tried the feature with a case where I have a subquery in the where clause, but it seems to be failing to dump the data. I ran the pg_dump like:\n> \n> $ pg_dump -d cary --where=\"test1:a3 = ( select max(aa1) from test2 )\" > testdump2\n> $ pg_dump: error: processing of table \"public.test1\" failed\n> \n> both test1 and test2 exist in the database and the same subquery works under psql.\n> \n> I also notice that the regression tests for pg_dump is failing due to the patch, I think it is worth looking into the failure messages and also add some test cases on the new \"where\" clause to ensure that it can cover as many use cases as possible.\n\nAs this is being reviewed, but time is running out in this CF, I'm moving this\nto the next CF. The entry will be moved to Waiting for Author based on the\nabove review.\n\ncheers ./daniel\n\n",
"msg_date": "Fri, 31 Jul 2020 00:38:10 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump --where option"
},
{
"msg_contents": "On Fri, Jul 31, 2020 at 1:38 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n>\n> > $ pg_dump -d cary --where=\"test1:a3 = ( select max(aa1) from test2 )\" >\n> testdump2\n> > $ pg_dump: error: processing of table \"public.test1\" failed\n> >\n> > both test1 and test2 exist in the database and the same subquery works\n> under psql.\n> >\n>\n\nThis is because pg_dump uses schema-qualified object name I add\ndocumentation about to use schema-qualified name when using sub query\n\n\n\n\n> > I also notice that the regression tests for pg_dump is failing due to\n> the patch, I think it is worth looking into the failure messages and also\n> add some test cases on the new \"where\" clause to ensure that it can cover\n> as many use cases as possible.\n>\n>\nI fix regression test failure on the attached patch.\n\nI don’t add tests because single-quotes and double-quotes are\nmeta-characters for PROVE too.\n\nregards\n\nSurafel",
"msg_date": "Mon, 14 Sep 2020 13:04:56 +0300",
"msg_from": "Surafel Temesgen <surafel3000@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump --where option"
},
{
"msg_contents": "> On 14 Sep 2020, at 12:04, Surafel Temesgen <surafel3000@gmail.com> wrote:\n> On Fri, Jul 31, 2020 at 1:38 AM Daniel Gustafsson <daniel@yesql.se <mailto:daniel@yesql.se>> wrote:\n> \n> > $ pg_dump -d cary --where=\"test1:a3 = ( select max(aa1) from test2 )\" > testdump2\n> > $ pg_dump: error: processing of table \"public.test1\" failed\n> > \n> > both test1 and test2 exist in the database and the same subquery works under psql.\n> This is because pg_dump uses schema-qualified object name I add documentation about to use schema-qualified name when using sub query\n\nDocumenting something is well and good, but isn't allowing arbitrary SQL\ncopy-pasted into the query (which isn't checked for schema qualification)\nopening up for some of the ill-effects of CVE-2018-1058?\n\n> I don’t add tests because single-quotes and double-quotes are meta-characters for PROVE too.\n\nI'm not sure I follow. Surely tests can be added for this functionality?\n\n\nHow should one invoke this on a multibyte char table name which require\nquoting, like --table='\"x\"' (where x would be an mb char). Reading the\noriginal thread and trying the syntax from there, it's also not clear how table\nnames with colons should be handled. I know they're not common, but if they're\nnot supported then the tradeoff should be documented.\n\nA nearby thread [0] is adding functionality to read from an input file due to\nthe command line being too short. Consumers of this might not run into the\nissues mentioned there, but it doesn't seem far fetched that someone who does\nalso adds a small WHERE clause too. Maybe these patches should join forces?\n\ncheers ./daniel\n\n[0] CAFj8pRB10wvW0CC9Xq=1XDs=zCQxer3cbLcNZa+qiX4cUH-G_A@mail.gmail.com\n\n",
"msg_date": "Mon, 14 Sep 2020 17:00:19 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump --where option"
},
{
"msg_contents": "On Mon, Sep 14, 2020 at 05:00:19PM +0200, Daniel Gustafsson wrote:\n> I'm not sure I follow. Surely tests can be added for this functionality?\n\nWe should have tests for that. I can see that this has not been\nanswered in two weeks, so this has been marked as returned with\nfeedback in the CF app.\n--\nMichael",
"msg_date": "Wed, 30 Sep 2020 15:18:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump --where option"
}
] |
[
{
"msg_contents": "While I was updating the snowball code, I noticed something strange. In \nsrc/backend/snowball/Makefile:\n\n# first column is language name and also name of dictionary for \nnot-all-ASCII\n# words, second is name of dictionary for all-ASCII words\n# Note order dependency: use of some other language as ASCII dictionary\n# must come after creation of that language\nLANGUAGES= \\\n arabic arabic \\\n basque basque \\\n catalan catalan \\\netc.\n\nThere are two cases where these two columns are not the same:\n\n hindi english \\\n russian english \\\n\nThe second one is old; the first one I added using the second one as \nexample. But I wonder what the rationale for this is. Maybe for hindi \none could make some kind of cultural argument, but for russian this \nseems entirely arbitrary. Perhaps using \"simple\" would be more sound here.\n\nMoreover, AFAIK, the following other languages do not use Latin-based \nalphabets:\n\n arabic arabic \\\n greek greek \\\n nepali nepali \\\n tamil tamil \\\n\nSo I wonder by what rationale they use their own stemmer for the ASCII \nfallback, which is probably not going to produce anything significant.\n\nWhat's the general idea here?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 16 Jun 2020 10:16:21 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "snowball ASCII stemmer configuration"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> There are two cases where these two columns are not the same:\n\n> hindi english \\\n> russian english \\\n\n> The second one is old; the first one I added using the second one as \n> example. But I wonder what the rationale for this is. Maybe for hindi \n> one could make some kind of cultural argument, but for russian this \n> seems entirely arbitrary.\n\nPerhaps it is, but we have actual Russians who think it's a good idea.\nI recall questioning that point some years ago, and Oleg replied that\nthey'd done that intentionally because (a) technical Russian uses a lot\nof English words, and (b) it's easy to tell which is which thanks to\nthe disjoint letter sets.\n\nWhether the same is true for Hindi, I have no idea.\n\n> Moreover, AFAIK, the following other languages do not use Latin-based \n> alphabets:\n\n> arabic arabic \\\n> greek greek \\\n> nepali nepali \\\n> tamil tamil \\\n\nHmm. I think all of those entries are ones that got added by me while\nabsorbing post-2007 Snowball updates, and I confess that I did not think\nabout this point. Maybe these should be changed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Jun 2020 09:53:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: snowball ASCII stemmer configuration"
},
{
"msg_contents": "On Tue, Jun 16, 2020 at 4:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > There are two cases where these two columns are not the same:\n>\n> > hindi english \\\n> > russian english \\\n>\n> > The second one is old; the first one I added using the second one as\n> > example. But I wonder what the rationale for this is. Maybe for hindi\n> > one could make some kind of cultural argument, but for russian this\n> > seems entirely arbitrary.\n>\n> Perhaps it is, but we have actual Russians who think it's a good idea.\n> I recall questioning that point some years ago, and Oleg replied that\n> they'd done that intentionally because (a) technical Russian uses a lot\n> of English words, and (b) it's easy to tell which is which thanks to\n> the disjoint letter sets.\n>\n>\nYes, you are right.\n\n\n> Whether the same is true for Hindi, I have no idea.\n>\n> > Moreover, AFAIK, the following other languages do not use Latin-based\n> > alphabets:\n>\n> > arabic arabic \\\n> > greek greek \\\n> > nepali nepali \\\n> > tamil tamil \\\n>\n> Hmm. I think all of those entries are ones that got added by me while\n> absorbing post-2007 Snowball updates, and I confess that I did not think\n> about this point. Maybe these should be changed.\n>\n> regards, tom lane\n>\n>\n>\n\n-- \nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\nOn Tue, Jun 16, 2020 at 4:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> There are two cases where these two columns are not the same:\n\n> hindi english \\\n> russian english \\\n\n> The second one is old; the first one I added using the second one as \n> example. But I wonder what the rationale for this is. 
Maybe for hindi \n> one could make some kind of cultural argument, but for russian this \n> seems entirely arbitrary.\n\nPerhaps it is, but we have actual Russians who think it's a good idea.\nI recall questioning that point some years ago, and Oleg replied that\nthey'd done that intentionally because (a) technical Russian uses a lot\nof English words, and (b) it's easy to tell which is which thanks to\nthe disjoint letter sets.\nYes, you are right. \nWhether the same is true for Hindi, I have no idea.\n\n> Moreover, AFAIK, the following other languages do not use Latin-based \n> alphabets:\n\n> arabic arabic \\\n> greek greek \\\n> nepali nepali \\\n> tamil tamil \\\n\nHmm. I think all of those entries are ones that got added by me while\nabsorbing post-2007 Snowball updates, and I confess that I did not think\nabout this point. Maybe these should be changed.\n\n regards, tom lane\n\n\n-- Postgres Professional: http://www.postgrespro.comThe Russian Postgres Company",
"msg_date": "Tue, 16 Jun 2020 17:32:19 +0300",
"msg_from": "Oleg Bartunov <obartunov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: snowball ASCII stemmer configuration"
},
{
"msg_contents": "I wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> Moreover, AFAIK, the following other languages do not use Latin-based \n>> alphabets:\n\n>> arabic arabic \\\n>> greek greek \\\n>> nepali nepali \\\n>> tamil tamil \\\n\n> Hmm. I think all of those entries are ones that got added by me while\n> absorbing post-2007 Snowball updates, and I confess that I did not think\n> about this point. Maybe these should be changed.\n\nAfter further reflection, I think these are indeed mistakes and we should\nchange them all. The argument for the Russian/English case, AIUI, is\n\"if we come across an all-ASCII word, it is most certainly not Russian,\nand the most likely Latin-based language is English\". Given the world\nas it is, I think the same argument works for all non-Latin-alphabet\nlanguages. Obviously specific applications might have a different idea\nof the best fallback language, but that's why we let users make their\nown text search configurations. For general-purpose use, falling back\nto English seems reasonable. And we can be dead certain that applying\na Greek stemmer to an ASCII word will do nothing useful, so the\nconfiguration choice shown above is unhelpful.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Jun 2020 10:37:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: snowball ASCII stemmer configuration"
},
{
"msg_contents": "\n\n> On Jun 16, 2020, at 7:37 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> I wrote:\n>> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>>> Moreover, AFAIK, the following other languages do not use Latin-based \n>>> alphabets:\n> \n>>> arabic arabic \\\n>>> greek greek \\\n>>> nepali nepali \\\n>>> tamil tamil \\\n> \n>> Hmm. I think all of those entries are ones that got added by me while\n>> absorbing post-2007 Snowball updates, and I confess that I did not think\n>> about this point. Maybe these should be changed.\n> \n> After further reflection, I think these are indeed mistakes and we should\n> change them all. The argument for the Russian/English case, AIUI, is\n> \"if we come across an all-ASCII word, it is most certainly not Russian,\n> and the most likely Latin-based language is English\". Given the world\n> as it is, I think the same argument works for all non-Latin-alphabet\n> languages. Obviously specific applications might have a different idea\n> of the best fallback language, but that's why we let users make their\n> own text search configurations. For general-purpose use, falling back\n> to English seems reasonable. And we can be dead certain that applying\n> a Greek stemmer to an ASCII word will do nothing useful, so the\n> configuration choice shown above is unhelpful.\n\nI am a bit surprised to see that you are right about this, because non-latin languages often have transliteration/romanization schemes for writing the language in the Latin alphabet, developed before computers had wide spread adoption of non-ASCII character sets, and still in use today for text messaging. I expected to find stemming rules for transliterated words, but can't find any indication of that, neither in the postgres sources, nor in the snowball sources I pulled from their repo. Is there some architectural separation of stemming from transliteration such that we'd never need to worry about it? 
If snowball ever published stemmers for transliterated text, we might have to revisit this issue, but for now your proposed change sounds fine to me.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 16 Jun 2020 08:25:03 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: snowball ASCII stemmer configuration"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> I am a bit surprised to see that you are right about this, because non-latin languages often have transliteration/romanization schemes for writing the language in the Latin alphabet, developed before computers had wide spread adoption of non-ASCII character sets, and still in use today for text messaging. I expected to find stemming rules for transliterated words, but can't find any indication of that, neither in the postgres sources, nor in the snowball sources I pulled from their repo. Is there some architectural separation of stemming from transliteration such that we'd never need to worry about it? If snowball ever published stemmers for transliterated text, we might have to revisit this issue, but for now your proposed change sounds fine to me.\n\nAgreed, if the Snowball stemmers worked on romanized texts then the\nsituation would be different. But they don't, AFAICS. Don't know\nif that is architectural, or a policy decision, or just lack of\nround tuits.\n\nThe thing that I actually find a bit shaky in this area is our\narchitectural decision to route words to different dictionaries\ndepending on whether they are all-ASCII or not. AIUI that was\ndone purely on the basis of the Russian/English case; it would\nfail badly if say you wanted to separate Russian from French.\nHowever, I have no great desire to revisit that design right now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Jun 2020 11:40:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: snowball ASCII stemmer configuration"
},
{
"msg_contents": "On 2020-06-16 16:37, Tom Lane wrote:\n> After further reflection, I think these are indeed mistakes and we should\n> change them all. The argument for the Russian/English case, AIUI, is\n> \"if we come across an all-ASCII word, it is most certainly not Russian,\n> and the most likely Latin-based language is English\". Given the world\n> as it is, I think the same argument works for all non-Latin-alphabet\n> languages. Obviously specific applications might have a different idea\n> of the best fallback language, but that's why we let users make their\n> own text search configurations. For general-purpose use, falling back\n> to English seems reasonable. And we can be dead certain that applying\n> a Greek stemmer to an ASCII word will do nothing useful, so the\n> configuration choice shown above is unhelpful.\n\nDo we *have* to have an ASCII stemmer that corresponds to an actual \nlanguage? Couldn't we use the simple stemmer or no stemmer at all?\n\nIn my experience, ASCII text in, say, Russian or Greek will typically be \nacronyms or brand names or the like, and there doesn't seem to be a \ngreat need to stem that kind of thing. Just doing nothing seems at \nleast as good.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 19 Jun 2020 11:46:53 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: snowball ASCII stemmer configuration"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Do we *have* to have an ASCII stemmer that corresponds to an actual \n> language? Couldn't we use the simple stemmer or no stemmer at all?\n> In my experience, ASCII text in, say, Russian or Greek will typically be \n> acronyms or brand names or the like, and there doesn't seem to be a \n> great need to stem that kind of thing. Just doing nothing seems at \n> least as good.\n\nWell, I have no horse in this race. But the reason it's like this for\nRussian is that Oleg, Teodor, and crew set it up that way ages ago.\nI'd tend to defer to their opinion about what's the most usable\nconfiguration for Russian. You could certainly argue that the situation\nis different for $other-language ... but without some hard evidence for\nthat position, making these cases all behave similarly seems like a\nreasonable approach.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 19 Jun 2020 09:44:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: snowball ASCII stemmer configuration"
}
] |
[
{
"msg_contents": "Hi,\n\nAttached patch proposes $Subject feature which forces the system into\nread-only\nmode where insert write-ahead log will be prohibited until ALTER SYSTEM READ\nWRITE executed.\n\nThe high-level goal is to make the availability/scale-out situation\nbetter. The feature\nwill help HA setup where the master server needs to stop accepting WAL\nwrites\nimmediately and kick out any transaction expecting WAL writes at the end,\nin case\nof network down on master or replication connections failures.\n\nFor example, this feature allows for a controlled switchover without\nneeding to shut\ndown the master. You can instead make the master read-only, wait until the\nstandby\ncatches up, and then promote the standby. The master remains available for\nread\nqueries throughout, and also for WAL streaming, but without the possibility\nof any\nnew write transactions. After switchover is complete, the master can be\nshut down\nand brought back up as a standby without needing to use pg_rewind.\n(Eventually, it\nwould be nice to be able to make the read-only master into a standby\nwithout having\nto restart it, but that is a problem for another patch.)\n\nThis might also help in failover scenarios. For example, if you detect that\nthe master\nhas lost network connectivity to the standby, you might make it read-only\nafter 30 s,\nand promote the standby after 60 s, so that you never have two writable\nmasters at\nthe same time. In this case, there's still some split-brain, but it's still\nbetter than what\nwe have now.\n\nDesign:\n----------\nThe proposed feature is built atop of super barrier mechanism commit[1] to\ncoordinate\nglobal state changes to all active backends. Backends which executed\nALTER SYSTEM READ { ONLY | WRITE } command places request to checkpointer\nprocess to change the requested WAL read/write state aka WAL prohibited and\nWAL\npermitted state respectively. 
When the checkpointer process sees the WAL prohibit state change request,\nit emits a global barrier and waits until all backends that participate in\nthe ProcSignal mechanism absorb it. Once that is done, the WAL read/write\nstate in shared memory and the control file will be updated so that\nXLogInsertAllowed() returns accordingly.\n\nIf there are open transactions that have acquired an XID, the sessions are\nkilled before the barrier is absorbed. They can't commit without writing\nWAL, and they can't abort without writing WAL, either, so we must at least\nabort the transaction. We don't necessarily need to kill the session, but\nit's hard to avoid in all cases because (1) if there are subtransactions\nactive, we need to force the top-level abort record to be written\nimmediately, but we can't really do that while keeping the subtransactions\non the transaction stack, and (2) if the session is idle, we also need the\ntop-level abort record to be written immediately, but can't send an error\nto the client until the next command is issued without losing wire protocol\nsynchronization. For now, we just use FATAL to kill the session; maybe this\ncan be improved in the future.\n\nOpen transactions that don't have an XID are not killed, but will get an\nERROR if they try to acquire an XID later, or if they try to write WAL\nwithout acquiring an XID (e.g. VACUUM). To make that happen, the patch adds\na new coding rule: a critical section that will write WAL must be preceded\nby a call to CheckWALPermitted(), AssertWALPermitted(), or\nAssertWALPermitted_HaveXID(). The latter variants are used when we know for\ncertain that inserting WAL here must be OK, either because we have an XID\n(we would have been killed by a change to read-only if one had occurred) or\nfor some other reason.\n\nThe ALTER SYSTEM READ WRITE command can be used to reverse the effects of\nALTER SYSTEM READ ONLY. 
Both ALTER SYSTEM READ ONLY and ALTER SYSTEM\nREAD WRITE update not only the shared memory state but also the control\nfile, so that changes survive a restart.\n\nThe transition between read-write and read-only is a pretty major\ntransition, so we emit a log message for each successful execution of an\nALTER SYSTEM READ { ONLY | WRITE } command. Also, we have added a new GUC\nsystem_is_read_only which returns \"on\" when the system is in the WAL\nprohibited state or in recovery.\n\nAnother part of the patch that is quite uneasy and needs discussion is\nthat when shutting down in the read-only state we skip the shutdown\ncheckpoint; at restart, startup recovery will be performed first and later\nthe read-only state will be restored to prohibit further WAL writes,\nirrespective of whether the recovery checkpoint succeeded or not. The\nconcern here is that if this startup recovery checkpoint wasn't ok, then it\nwill never happen, even if the system is later put back into read-write\nmode. Thoughts?\n\nQuick demo:\n----------------\nWe have a few active sessions. Session 1 has performed some writes and\nstayed in the idle state for some time; in between, in session 2 a\nsuperuser successfully changed the system state to read-only via the ALTER\nSYSTEM READ ONLY command, which kills session 1. Any other backend that\ntries to run write transactions thereafter will see a read-only system\nerror.\n\n------------- SESSION 1 -------------\nsession_1=# BEGIN;\nBEGIN\nsession_1=*# CREATE TABLE foo AS SELECT i FROM generate_series(1,5) i;\nSELECT 5\n\n------------- SESSION 2 -------------\nsession_2=# ALTER SYSTEM READ ONLY;\nALTER SYSTEM\n\n------------- SESSION 1 -------------\nsession_1=*# COMMIT;\nFATAL: system is now read only\nHINT: Cannot continue a transaction if it has performed writes while\nsystem is read only.\nserver closed the connection unexpectedly\nThis probably means the server terminated abnormally\nbefore or while processing the request.\nThe connection to the server was lost. 
Attempting reset: Succeeded.\n\n------------- SESSION 3 -------------\nsession_3=# CREATE TABLE foo_bar (i int);\nERROR: cannot execute CREATE TABLE in a read-only transaction\n\n------------- SESSION 4 -------------\nsession_4=# CHECKPOINT;\nERROR: system is now read only\n\nThe system can be put back into read-write mode by \"ALTER SYSTEM READ WRITE\":\n\n------------- SESSION 2 -------------\nsession_2=# ALTER SYSTEM READ WRITE;\nALTER SYSTEM\n\n------------- SESSION 3 -------------\nsession_3=# CREATE TABLE foo_bar (i int);\nCREATE TABLE\n\n------------- SESSION 4 -------------\nsession_4=# CHECKPOINT;\nCHECKPOINT\n\n\nTODOs:\n-----------\n1. Documentation.\n\nAttachments summary:\n------------------------------\nI tried to split the changes so that they are easy to read and show the\nincremental implementation.\n\n0001: Patch by Robert, to add the ability to support errors in global\n      barrier absorption.\n0002: Patch implements the ALTER SYSTEM READ { ONLY | WRITE } syntax and\n      psql tab completion support for it.\n0003: A basic implementation where the system can accept the $Subject\n      command and change the system to read-only by emitting a barrier.\n0004: Patch enhances this so that the backend executes the $Subject command\n      only and places a request with the checkpointer, which is responsible\n      for changing the state by emitting the barrier. Also, stores the\n      state in the control file to make it persist across server restarts.\n0005: Patch tightens the check to prevent errors in the critical section.\n0006: Documentation - WIP\n\nCredit:\n-------\nThe feature is part of Andres Freund's high-level design ideas for an\ninbuilt graceful failover for PostgreSQL. 
Feature implementation design by Robert\nHaas.\nInitial patch by Amit Khandekar; further work and improvements by me under\nRobert's\nguidance, including this mail writeup as well.\n\nRef:\n----\n1] Global barrier commit # 16a4e4aecd47da7a6c4e1ebc20f6dd1a13f9133b\n\nThank you!\n\nRegards,\nAmul Sul\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 16 Jun 2020 19:25:40 +0530",
"msg_from": "amul sul <sulamul@gmail.com>",
"msg_from_op": true,
"msg_subject": "[Patch] ALTER SYSTEM READ ONLY"
},
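The session-handling rules the patch summary above describes — at the read-only transition, sessions holding an XID are killed because they can neither commit nor abort without WAL, while sessions without an XID survive but fail on any later attempt to write WAL — can be modeled outside the server. The sketch below is a toy simulation under those stated assumptions, not actual PostgreSQL code; the class and method names are invented for illustration.

```python
# Toy model of the ALTER SYSTEM READ ONLY transition rules (not server code).

class ReadOnlyError(Exception):
    pass

class Session:
    def __init__(self, name, has_xid=False):
        self.name = name
        self.has_xid = has_xid   # True once the session has written WAL
        self.alive = True

    def write_wal(self, system):
        # Models a backend reaching a would-be WAL write (CheckWALPermitted).
        if not self.alive:
            raise RuntimeError(f"{self.name}: connection already terminated")
        if system.read_only:
            raise ReadOnlyError(f"{self.name}: system is now read only")
        self.has_xid = True

class System:
    def __init__(self):
        self.read_only = False
        self.sessions = []

    def alter_system_read_only(self):
        self.read_only = True
        # Sessions with an XID cannot end without writing a commit or abort
        # record, so they are terminated (FATAL in the real patch).
        for s in self.sessions:
            if s.has_xid:
                s.alive = False

sys_ = System()
writer = Session("session 1", has_xid=True)   # idle-in-transaction writer
reader = Session("session 3")                 # no XID acquired yet
sys_.sessions += [writer, reader]

sys_.alter_system_read_only()
print(writer.alive)   # False: killed, it held an XID
print(reader.alive)   # True: survives, but cannot start writing
try:
    reader.write_wal(sys_)
except ReadOnlyError as e:
    print("error:", e)
```

The same model also shows why prepared transactions (discussed later in the thread) are unproblematic: they hold no session that could be forced to write an end-of-transaction record.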
{
"msg_contents": "On Tue, Jun 16, 2020 at 7:26 PM amul sul <sulamul@gmail.com> wrote:\n>\n> Hi,\n>\n> Attached patch proposes $Subject feature which forces the system into read-only\n> mode where insert write-ahead log will be prohibited until ALTER SYSTEM READ\n> WRITE executed.\n>\n> The high-level goal is to make the availability/scale-out situation better. The feature\n> will help HA setup where the master server needs to stop accepting WAL writes\n> immediately and kick out any transaction expecting WAL writes at the end, in case\n> of network down on master or replication connections failures.\n>\n> For example, this feature allows for a controlled switchover without needing to shut\n> down the master. You can instead make the master read-only, wait until the standby\n> catches up, and then promote the standby. The master remains available for read\n> queries throughout, and also for WAL streaming, but without the possibility of any\n> new write transactions. After switchover is complete, the master can be shut down\n> and brought back up as a standby without needing to use pg_rewind. (Eventually, it\n> would be nice to be able to make the read-only master into a standby without having\n> to restart it, but that is a problem for another patch.)\n>\n> This might also help in failover scenarios. For example, if you detect that the master\n> has lost network connectivity to the standby, you might make it read-only after 30 s,\n> and promote the standby after 60 s, so that you never have two writable masters at\n> the same time. In this case, there's still some split-brain, but it's still better than what\n> we have now.\n>\n> Design:\n> ----------\n> The proposed feature is built atop of super barrier mechanism commit[1] to coordinate\n> global state changes to all active backends. 
Backends which executed\n> ALTER SYSTEM READ { ONLY | WRITE } command places request to checkpointer\n> process to change the requested WAL read/write state aka WAL prohibited and WAL\n> permitted state respectively. When the checkpointer process sees the WAL prohibit\n> state change request, it emits a global barrier and waits until all backends that\n> participate in the ProcSignal absorbs it. Once it has done the WAL read/write state in\n> share memory and control file will be updated so that XLogInsertAllowed() returns\n> accordingly.\n>\n\nDo we prohibit the checkpointer from writing dirty pages and writing a\ncheckpoint record as well? If so, will the checkpointer process\nwrite the current dirty pages and write a checkpoint record, or do we\nskip that as well?\n\n> If there are open transactions that have acquired an XID, the sessions are killed\n> before the barrier is absorbed.\n>\n\nWhat about prepared transactions?\n\n> They can't commit without writing WAL, and they\n> can't abort without writing WAL, either, so we must at least abort the transaction. We\n> don't necessarily need to kill the session, but it's hard to avoid in all cases because\n> (1) if there are subtransactions active, we need to force the top-level abort record to\n> be written immediately, but we can't really do that while keeping the subtransactions\n> on the transaction stack, and (2) if the session is idle, we also need the top-level abort\n> record to be written immediately, but can't send an error to the client until the next\n> command is issued without losing wire protocol synchronization. For now, we just use\n> FATAL to kill the session; maybe this can be improved in the future.\n>\n> Open transactions that don't have an XID are not killed, but will get an ERROR if they\n> try to acquire an XID later, or if they try to write WAL without acquiring an XID (e.g. VACUUM).\n>\n\nWhat if vacuum is on an unlogged relation? 
Do we allow writes via\nvacuum to an unlogged relation?\n\n> To make that happen, the patch adds a new coding rule: a critical section that will write\n> WAL must be preceded by a call to CheckWALPermitted(), AssertWALPermitted(), or\n> AssertWALPermitted_HaveXID(). The latter variants are used when we know for certain\n> that inserting WAL here must be OK, either because we have an XID (we would have\n> been killed by a change to read-only if one had occurred) or for some other reason.\n>\n> The ALTER SYSTEM READ WRITE command can be used to reverse the effects of\n> ALTER SYSTEM READ ONLY. Both ALTER SYSTEM READ ONLY and ALTER\n> SYSTEM READ WRITE update not only the shared memory state but also the control\n> file, so that changes survive a restart.\n>\n> The transition between read-write and read-only is a pretty major transition, so we emit\n> log message for each successful execution of a ALTER SYSTEM READ {ONLY | WRITE}\n> command. Also, we have added a new GUC system_is_read_only which returns \"on\"\n> when the system is in WAL prohibited state or recovery.\n>\n> Another part of the patch that quite uneasy and need a discussion is that when the\n> shutdown in the read-only state we do skip shutdown checkpoint and at a restart, first\n> startup recovery will be performed and latter the read-only state will be restored to\n> prohibit further WAL write irrespective of recovery checkpoint succeed or not. The\n> concern is here if this startup recovery checkpoint wasn't ok, then it will never happen\n> even if it's later put back into read-write mode.\n>\n\nI am not able to understand this problem. What do you mean by\n\"recovery checkpoint succeed or not\"? Do you add a try..catch and skip\nany errors while performing the recovery checkpoint?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 17 Jun 2020 18:32:12 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On 6/16/20 7:25 PM, amul sul wrote:\n> Attached patch proposes $Subject feature which forces the system into \n> read-only\n> mode where insert write-ahead log will be prohibited until ALTER \n> SYSTEM READ\n> WRITE executed.\n\nThanks Amul.\n\n1) ALTER SYSTEM\n\npostgres=# alter system read only;\nALTER SYSTEM\npostgres=# alter system reset all;\nALTER SYSTEM\npostgres=# create table t1(n int);\nERROR: cannot execute CREATE TABLE in a read-only transaction\n\nInitially I thought that after firing 'Alter system reset all', it would be \nback to normal.\n\nCan't we have a syntax like - \"Alter system set read_only='True' ; \"\n\nso that the ALTER SYSTEM command syntax stays the same for all.\n\npostgres=# \\\\h alter system\nCommand: ALTER SYSTEM\nDescription: change a server configuration parameter\nSyntax:\nALTER SYSTEM SET configuration_parameter { TO | = } { value | 'value' | \nDEFAULT }\n\nALTER SYSTEM RESET configuration_parameter\nALTER SYSTEM RESET ALL\n\nHow are we going to justify this in the help output of ALTER SYSTEM?\n\n2) When I connected to postgres in single-user mode, I was not able to \nset the system to read-only:\n\n[edb@tushar-ldap-docker bin]$ ./postgres --single -D data postgres\n\n\nPostgreSQL stand-alone backend 14devel\nbackend> alter system read only;\nERROR: checkpointer is not running\n\nbackend>\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n",
"msg_date": "Wed, 17 Jun 2020 19:21:18 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, Jun 17, 2020 at 9:02 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Do we prohibit the checkpointer to write dirty pages and write a\n> checkpoint record as well? If so, will the checkpointer process\n> writes the current dirty pages and writes a checkpoint record or we\n> skip that as well?\n\nI think the definition of this feature should be that you can't write\nWAL. So, it's OK to write dirty pages in general, for example to allow\nfor buffer replacement so we can continue to run read-only queries.\nBut there's no reason for the checkpointer to do it: it shouldn't try\nto checkpoint, and therefore it shouldn't write dirty pages either.\n(I'm not sure if this is how the patch currently works; I'm describing\nhow I think it should work.)\n\n> > If there are open transactions that have acquired an XID, the sessions are killed\n> > before the barrier is absorbed.\n>\n> What about prepared transactions?\n\nThey don't matter. The problem with a running transaction that has an\nXID is that somebody might end the session, and then we'd have to\nwrite either a commit record or an abort record. But a prepared\ntransaction doesn't have that problem. You can't COMMIT PREPARED or\nROLLBACK PREPARED while the system is read-only, as I suppose anybody\nwould expect, but their mere existence isn't a problem.\n\n> What if vacuum is on an unlogged relation? Do we allow writes via\n> vacuum to unlogged relation?\n\nInteresting question. I was thinking that we should probably teach the\nautovacuum launcher to stop launching workers while the system is in a\nREAD ONLY state, but what about existing workers? Anything that\ngenerates invalidation messages, acquires an XID, or writes WAL has to\nbe blocked in a read-only state; but I'm not sure to what extent the\nfirst two of those things would be a problem for vacuuming an unlogged\ntable. 
I think you couldn't truncate it, at least, because that\nacquires an XID.\n\n> > Another part of the patch that quite uneasy and need a discussion is that when the\n> > shutdown in the read-only state we do skip shutdown checkpoint and at a restart, first\n> > startup recovery will be performed and latter the read-only state will be restored to\n> > prohibit further WAL write irrespective of recovery checkpoint succeed or not. The\n> > concern is here if this startup recovery checkpoint wasn't ok, then it will never happen\n> > even if it's later put back into read-write mode.\n>\n> I am not able to understand this problem. What do you mean by\n> \"recovery checkpoint succeed or not\", do you add a try..catch and skip\n> any error while performing recovery checkpoint?\n\nWhat I think should happen is that the end-of-recovery checkpoint\nshould be skipped, and then if the system is put back into read-write\nmode later we should do it then. But I think right now the patch\nperforms the end-of-recovery checkpoint before restoring the read-only\nstate, which seems 100% wrong to me.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 17 Jun 2020 10:41:55 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
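The request flow debated in the messages above — a backend hands the state change to the checkpointer, which emits a global barrier, waits until every backend has absorbed it, and only then flips the shared WAL-prohibited flag — can be sketched as a generation-counter barrier. This is a hypothetical single-process model, not the real ProcSignal machinery; all names (Barrier, Checkpointer, etc.) are invented for illustration.

```python
# Toy model of the "emit barrier, wait for absorption" design (not server code).

class Barrier:
    def __init__(self, n_backends):
        self.generation = 0
        self.absorbed = [0] * n_backends  # last generation each backend saw

    def emit(self):
        self.generation += 1
        return self.generation

    def absorb(self, backend_id):
        # In the real design each backend processes the interrupt at its
        # next safe point; here we just record the current generation.
        self.absorbed[backend_id] = self.generation

    def all_absorbed(self, gen):
        return all(a >= gen for a in self.absorbed)

class Checkpointer:
    def __init__(self, barrier):
        self.barrier = barrier
        self.wal_prohibited = False

    def handle_read_only_request(self, backends):
        gen = self.barrier.emit()
        for b in backends:            # stand-in for "wait until all absorbed"
            self.barrier.absorb(b)
        assert self.barrier.all_absorbed(gen)
        # Only now is the shared state flipped (and, in the real patch,
        # persisted to the control file).
        self.wal_prohibited = True

barrier = Barrier(n_backends=3)
cp = Checkpointer(barrier)
cp.handle_read_only_request(backends=[0, 1, 2])
print(cp.wal_prohibited)  # True
```

The ordering matters: no backend can observe `wal_prohibited == True` before it has absorbed the barrier, which is what lets XLogInsertAllowed()-style checks stay race-free.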
{
"msg_contents": "On Wed, Jun 17, 2020 at 9:51 AM tushar <tushar.ahuja@enterprisedb.com> wrote:\n> 1) ALTER SYSTEM\n>\n> postgres=# alter system read only;\n> ALTER SYSTEM\n> postgres=# alter system reset all;\n> ALTER SYSTEM\n> postgres=# create table t1(n int);\n> ERROR: cannot execute CREATE TABLE in a read-only transaction\n>\n> Initially i thought after firing 'Alter system reset all' , it will be\n> back to normal.\n>\n> can't we have a syntax like - \"Alter system set read_only='True' ; \"\n\nNo, this needs to be separate from the GUC-modification syntax, I\nthink. It's a different kind of state change. It doesn't, and can't,\njust edit postgresql.auto.conf.\n\n> 2)When i connected to postgres in a single user mode , i was not able to\n> set the system in read only\n>\n> [edb@tushar-ldap-docker bin]$ ./postgres --single -D data postgres\n>\n> PostgreSQL stand-alone backend 14devel\n> backend> alter system read only;\n> ERROR: checkpointer is not running\n>\n> backend>\n\nHmm, that's an interesting finding. I wonder what happens if you make\nthe system read only, shut it down, and then restart it in single-user\nmode. Given what you see here, I bet you can't put it back into a\nread-write state from single user mode either, which seems like a\nproblem. Either single-user mode should allow changing between R/O and\nR/W, or alternatively single-user mode should ignore ALTER SYSTEM READ\nONLY and always allow writes anyway.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 17 Jun 2020 10:45:44 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Tue, Jun 16, 2020 at 7:26 PM amul sul <sulamul@gmail.com> wrote:\n>> Attached patch proposes $Subject feature which forces the system into read-only\n>> mode where insert write-ahead log will be prohibited until ALTER SYSTEM READ\n>> WRITE executed.\n\n> Do we prohibit the checkpointer to write dirty pages and write a\n> checkpoint record as well?\n\nI think this is a really bad idea and should simply be rejected.\n\nAside from the points you mention, such a switch would break autovacuum.\nIt would break the ability for scans to do HOT-chain cleanup, which would\nlikely lead to some odd behaviors (if, eg, somebody flips the switch\nbetween where that's supposed to happen and where an update needs to\nhappen on the same page). It would break the ability for indexscans to do\nkilled-tuple marking, which is critical for performance in some scenarios.\nIt would break the ability to set tuple hint bits, which is even more\ncritical for performance. It'd possibly break, or at least complicate,\nlogic in index AMs to deal with index format updates --- I'm fairly sure\nthere are places that will try to update out-of-date data structures\nrather than cope with the old structure, even in nominally read-only\nsearches.\n\nI also think that putting such a thing into ALTER SYSTEM has got big\nlogical problems. Someday we will probably want to have ALTER SYSTEM\nwrite WAL so that standby servers can absorb the settings changes.\nBut if writing WAL is disabled, how can you ever turn the thing off again?\n\nLastly, the arguments in favor seem pretty bogus. HA switchover normally\ninvolves just killing the primary server, not expecting that you can\nleisurely issue some commands to it first. 
Commands that involve a whole\nbunch of subtle interlocking --- and, therefore, aren't going to work if\nanything has gone wrong already anywhere in the server --- seem like a\nparticularly poor thing to be hanging your HA strategy on. I also wonder\nwhat this accomplishes that couldn't be done much more simply by killing\nthe walsenders.\n\nIn short, I see a huge amount of complexity here, an ongoing source of\nhard-to-identify, hard-to-fix bugs, and not very much real usefulness.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Jun 2020 10:58:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, Jun 17, 2020 at 10:58 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Aside from the points you mention, such a switch would break autovacuum.\n> It would break the ability for scans to do HOT-chain cleanup, which would\n> likely lead to some odd behaviors (if, eg, somebody flips the switch\n> between where that's supposed to happen and where an update needs to\n> happen on the same page). It would break the ability for indexscans to do\n> killed-tuple marking, which is critical for performance in some scenarios.\n> It would break the ability to set tuple hint bits, which is even more\n> critical for performance. It'd possibly break, or at least complicate,\n> logic in index AMs to deal with index format updates --- I'm fairly sure\n> there are places that will try to update out-of-date data structures\n> rather than cope with the old structure, even in nominally read-only\n> searches.\n\nThis seems like pretty dubious hand-waving. Of course, things that\nwrite WAL are going to be broken by a switch that prevents writing\nWAL; but if they were not, there would be no purpose in having such a\nswitch, so that's not really an argument. But you seem to have mixed\nin some things that don't require writing WAL, and claimed without\nevidence that those would somehow also be broken. I don't think that's\nthe case, but even if it were, so what? We live with all of these\nrestrictions on standbys anyway.\n\n> I also think that putting such a thing into ALTER SYSTEM has got big\n> logical problems. Someday we will probably want to have ALTER SYSTEM\n> write WAL so that standby servers can absorb the settings changes.\n> But if writing WAL is disabled, how can you ever turn the thing off again?\n\nI mean, the syntax that we use for a feature like this is arbitrary. I\npicked this one, so I like it, but it can easily be changed if other\npeople want something else. The rest of this argument doesn't seem to\nme to make very much sense. 
The existing ALTER SYSTEM functionality to\nmodify a text configuration file isn't replicated today and I'm not\nsure why we should make it so, considering that replication generally\nonly considers things that are guaranteed to be the same on the master\nand the standby, which this is not. But even if we did, that has\nnothing to do with whether some functionality that changes the system\nstate without changing a text file ought to also be replicated. This\nis a piece of cluster management functionality and it makes no sense\nto replicate it. And no right-thinking person would ever propose to\nchange a feature that renders the system read-only in such a way that\nit was impossible to deactivate it. That would be nuts.\n\n> Lastly, the arguments in favor seem pretty bogus. HA switchover normally\n> involves just killing the primary server, not expecting that you can\n> leisurely issue some commands to it first.\n\nYeah, that's exactly the problem I want to fix. If you kill the master\nserver, then you have interrupted service, even for read-only queries.\nThat sucks. Also, even if you don't care about interrupting service on\nthe master, it's actually sorta hard to guarantee a clean switchover.\nThe walsenders are supposed to send all the WAL from the master before\nexiting, but if the connection is broken for some reason, then the\nmaster is down and the standbys can't stream the rest of the WAL. You\ncan start it up again, but then you might generate more WAL. You can\ntry to copy the WAL around manually from one pg_wal directory to\nanother, but that's not a very nice thing for users to need to do\nmanually, and seems buggy and error-prone.\n\nAnd how do you figure out where the WAL ends on the master and make\nsure that the standby replayed it all? If the master is up, it's easy:\nyou just use the same queries you use all the time. 
If the master is\ndown, you have to use some different technique that involves manually\nexamining files or scrutinizing pg_controldata output. It's actually\nvery difficult to get this right.\n\n> Commands that involve a whole\n> bunch of subtle interlocking --- and, therefore, aren't going to work if\n> anything has gone wrong already anywhere in the server --- seem like a\n> particularly poor thing to be hanging your HA strategy on.\n\nIt's important not to conflate controlled switchover with failover.\nWhen there's a failover, you have to accept some risk of data loss or\nservice interruption; but a controlled switchover does not need to\ncarry the same risks and there are plenty of systems out there where\nit doesn't.\n\n> I also wonder\n> what this accomplishes that couldn't be done much more simply by killing\n> the walsenders.\n\nKilling the walsenders does nothing ... the clients immediately reconnect.\n\n> In short, I see a huge amount of complexity here, an ongoing source of\n> hard-to-identify, hard-to-fix bugs, and not very much real usefulness.\n\nI do think this is complex and the risk of bugs that are hard to\nidentify or hard to fix certainly needs to be considered. I\nstrenuously disagree with the idea that there is not very much real\nusefulness. Getting failover set up in a way that actually works\nrobustly is, in my experience, one of the two or three most serious\nchallenges my employer's customers face today. The core server support\nwe provide for that is breathtakingly primitive, and it's urgent that\nwe do better. Cloud providers are moving users from PostgreSQL to\ntheir own forks of PostgreSQL in vast numbers in large part because\nusers don't want to deal with this crap, and the cloud providers have\nmade it so they don't have to. People running PostgreSQL themselves\nneed complex third-party tools and even then the experience isn't as\ngood as what a major cloud provider would offer. 
This patch is not\ngoing to fix that, but I think it's a step in the right direction, and\nI hope others will agree.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 17 Jun 2020 12:07:22 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
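Robert's point above about deciding "has the standby replayed everything?" ultimately reduces to comparing WAL positions (LSNs). PostgreSQL prints an LSN as two hex halves separated by a slash (e.g. `0/3000148`); combining the halves into one 64-bit integer makes the comparison trivial. The helper below is a sketch with made-up LSN values; on a live system the inputs would come from `pg_current_wal_flush_lsn()` on the primary and `pg_last_wal_replay_lsn()` on the standby.

```python
# Parse PostgreSQL's textual LSN format and compare positions.

def parse_lsn(text):
    # 'X/Y' -> (X << 32) | Y, both halves hexadecimal
    hi, lo = text.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

def standby_caught_up(primary_flush_lsn, standby_replay_lsn):
    return parse_lsn(standby_replay_lsn) >= parse_lsn(primary_flush_lsn)

print(parse_lsn("0/3000148"))                       # 50331976
print(standby_caught_up("0/3000148", "0/3000148"))  # True
print(standby_caught_up("1/0", "0/FFFFFFFF"))       # False (one byte behind)
```

With the primary held in a read-only state, its flush LSN stops advancing, so this check becomes a stable switchover condition rather than a moving target.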
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> This seems like pretty dubious hand-waving. Of course, things that\n> write WAL are going to be broken by a switch that prevents writing\n> WAL; but if they were not, there would be no purpose in having such a\n> switch, so that's not really an argument. But you seem to have mixed\n> in some things that don't require writing WAL, and claimed without\n> evidence that those would somehow also be broken.\n\nWhich of the things I mentioned don't require writing WAL?\n\nYou're right that these are the same things that we already forbid on a\nstandby, for the same reason, so maybe it won't be as hard to identify\nthem as I feared. I wonder whether we should envision this as \"demote\nprimary to standby\" rather than an independent feature.\n\n>> I also think that putting such a thing into ALTER SYSTEM has got big\n>> logical problems.\n\n> ... no right-thinking person would ever propose to\n> change a feature that renders the system read-only in such a way that\n> it was impossible to deactivate it. That would be nuts.\n\nMy point was that putting this in ALTER SYSTEM paints us into a corner\nas to what we can do with ALTER SYSTEM in the future: we won't ever be\nable to make that do anything that would require writing WAL. And I\ndon't entirely believe your argument that that will never be something\nwe'd want to do.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Jun 2020 12:27:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, Jun 17, 2020 at 12:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Which of the things I mentioned don't require writing WAL?\n\nWriting hint bits and marking index tuples as killed do not write WAL\nunless checksums are enabled.\n\n> You're right that these are the same things that we already forbid on a\n> standby, for the same reason, so maybe it won't be as hard to identify\n> them as I feared. I wonder whether we should envision this as \"demote\n> primary to standby\" rather than an independent feature.\n\nSee my comments on the nearby pg_demote thread. I think we want both.\n\n> >> I also think that putting such a thing into ALTER SYSTEM has got big\n> >> logical problems.\n>\n> > ... no right-thinking person would ever propose to\n> > change a feature that renders the system read-only in such a way that\n> > it was impossible to deactivate it. That would be nuts.\n>\n> My point was that putting this in ALTER SYSTEM paints us into a corner\n> as to what we can do with ALTER SYSTEM in the future: we won't ever be\n> able to make that do anything that would require writing WAL. And I\n> don't entirely believe your argument that that will never be something\n> we'd want to do.\n\nI think that depends a lot on how you view ALTER SYSTEM. I believe it\nwould be reasonable to view ALTER SYSTEM as a catch-all for commands\nthat make system-wide state changes, even if those changes are not all\nof the same kind as each other; some might be machine-local, and\nothers cluster-wide; some WAL-logged, and others not. I don't think\nit's smart to view ALTER SYSTEM through a lens that boxes it into only\nediting postgresql.auto.conf; if that were so, we ought to have called\nit ALTER CONFIGURATION FILE or something rather than ALTER SYSTEM. For\nthat reason, I do not see the choice of syntax as painting us into a\ncorner.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 17 Jun 2020 12:34:53 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Jun 17, 2020 at 12:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Which of the things I mentioned don't require writing WAL?\n\n> Writing hint bits and marking index tuples as killed do not write WAL\n> unless checksums are enabled.\n\nAnd your point is? I thought enabling checksums was considered\ngood practice these days.\n\n>> You're right that these are the same things that we already forbid on a\n>> standby, for the same reason, so maybe it won't be as hard to identify\n>> them as I feared. I wonder whether we should envision this as \"demote\n>> primary to standby\" rather than an independent feature.\n\n> See my comments on the nearby pg_demote thread. I think we want both.\n\nWell, if pg_demote can be done for X amount of effort, and largely\ngets the job done, while this requires 10X or 100X the effort and\nintroduces 10X or 100X as many bugs, I'm not especially convinced\nthat we want both. \n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Jun 2020 12:45:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, Jun 17, 2020 at 12:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Writing hint bits and marking index tuples as killed do not write WAL\n> > unless checksums are enabled.\n>\n> And your point is? I thought enabling checksums was considered\n> good practice these days.\n\nI don't want to have an argument about what typical or best practices\nare; I wasn't trying to make any point about that one way or the\nother. I'm just saying that the operations you listed don't\nnecessarily all write WAL. In any event, even if they did, the larger\npoint is that standbys work like that, too, so it's not unprecedented\nor illogical to think of such things.\n\n> >> You're right that these are the same things that we already forbid on a\n> >> standby, for the same reason, so maybe it won't be as hard to identify\n> >> them as I feared. I wonder whether we should envision this as \"demote\n> >> primary to standby\" rather than an independent feature.\n>\n> > See my comments on the nearby pg_demote thread. I think we want both.\n>\n> Well, if pg_demote can be done for X amount of effort, and largely\n> gets the job done, while this requires 10X or 100X the effort and\n> introduces 10X or 100X as many bugs, I'm not especially convinced\n> that we want both.\n\nSure: if two features duplicate each other, and one of them is way\nmore work and way more buggy, then it's silly to have both, and we\nshould just accept the easy, bug-free one. However, as I said in the\nother email to which I referred you, I currently believe that these\ntwo features actually don't duplicate each other and that using them\nboth together would be quite beneficial. Also, even if they did, I\ndon't know where you are getting the idea that this feature will be\n10X or 100X more work and more buggy than the other one. I have looked\nat this code prior to it being posted, but I haven't looked at the\nother code at all; I am guessing that you have looked at neither. 
I\nwould be happy if you did, because it is often the case that\narchitectural issues that escape other people are apparent to you upon\nexamination, and it's always nice to know about those earlier rather\nthan later so that one can decide to (a) give up or (b) fix them. But\nI see no point in speculating in the abstract that such issues may\nexist and that they may be more severe in one case than the other. My\nown guess is that, properly implemented, they are within 2-3X of each\nin one direction or the other, not 10-100X. It is almost unbelievable\nto me that the pg_demote patch could be 100X simpler than this one; if\nit were, I'd be practically certain it was a 5-minute hack job\nunworthy of any serious consideration.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 17 Jun 2020 13:26:18 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-17 12:07:22 -0400, Robert Haas wrote:\n> On Wed, Jun 17, 2020 at 10:58 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I also think that putting such a thing into ALTER SYSTEM has got big\n> > logical problems. Someday we will probably want to have ALTER SYSTEM\n> > write WAL so that standby servers can absorb the settings changes.\n> > But if writing WAL is disabled, how can you ever turn the thing off again?\n> \n> I mean, the syntax that we use for a feature like this is arbitrary. I\n> picked this one, so I like it, but it can easily be changed if other\n> people want something else. The rest of this argument doesn't seem to\n> me to make very much sense. The existing ALTER SYSTEM functionality to\n> modify a text configuration file isn't replicated today and I'm not\n> sure why we should make it so, considering that replication generally\n> only considers things that are guaranteed to be the same on the master\n> and the standby, which this is not. But even if we did, that has\n> nothing to do with whether some functionality that changes the system\n> state without changing a text file ought to also be replicated. This\n> is a piece of cluster management functionality and it makes no sense\n> to replicate it. And no right-thinking person would ever propose to\n> change a feature that renders the system read-only in such a way that\n> it was impossible to deactivate it. That would be nuts.\n\nI agree that the concrete syntax here doesn't seem to matter much. If\nthis worked by actually putting a GUC into the config file, it would\nperhaps matter a bit more, but it doesn't afaict. It seems good to\navoid new top-level statements, and ALTER SYSTEM seems to fit well.\n\n\nI wonder if there's an argument about wanting to be able to execute this\ncommand over a physical replication connection? 
I think this feature\nfairly obviously is a building block for \"gracefully failover to this\nstandby\", and it seems like it'd be nicer if that didn't potentially\nrequire two pg_hba.conf entries for the to-be-promoted primary on the\ncurrent/old primary?\n\n\n> > Lastly, the arguments in favor seem pretty bogus. HA switchover normally\n> > involves just killing the primary server, not expecting that you can\n> > leisurely issue some commands to it first.\n> \n> Yeah, that's exactly the problem I want to fix. If you kill the master\n> server, then you have interrupted service, even for read-only queries.\n> That sucks. Also, even if you don't care about interrupting service on\n> the master, it's actually sorta hard to guarantee a clean switchover.\n> The walsenders are supposed to send all the WAL from the master before\n> exiting, but if the connection is broken for some reason, then the\n> master is down and the standbys can't stream the rest of the WAL. You\n> can start it up again, but then you might generate more WAL. You can\n> try to copy the WAL around manually from one pg_wal directory to\n> another, but that's not a very nice thing for users to need to do\n> manually, and seems buggy and error-prone.\n\nAlso (I'm sure you're aware) if you just non-gracefully shut down the\nold primary, you're going to have to rewind the old primary to be able\nto use it as a standby. And if you non-gracefully stop you're gonna\nincur checkpoint overhead, which is *massive* on non-toy\ndatabases. There's a huge practical difference between a minor version\nupgrade causing 10s of unavailability and causing 5min-30min.\n\n\n> And how do you figure out where the WAL ends on the master and make\n> sure that the standby replayed it all? If the master is up, it's easy:\n> you just use the same queries you use all the time. If the master is\n> down, you have to use some different technique that involves manually\n> examining files or scrutinizing pg_controldata output. 
It's actually\n> very difficult to get this right.\n\nYea, it's absurdly hard. I think it's really kind of ridiculous that we\nexpect others to get this right if we, the developers of this stuff,\ncan't really get it right because it's so complicated. Which imo makes\nthis:\n\n> > Commands that involve a whole\n> > bunch of subtle interlocking --- and, therefore, aren't going to work if\n> > anything has gone wrong already anywhere in the server --- seem like a\n> > particularly poor thing to be hanging your HA strategy on.\n\nmore of an argument for having this type of stuff builtin.\n\n\n> It's important not to conflate controlled switchover with failover.\n> When there's a failover, you have to accept some risk of data loss or\n> service interruption; but a controlled switchover does not need to\n> carry the same risks and there are plenty of systems out there where\n> it doesn't.\n\nYup.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Jun 2020 11:05:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, Jun 17, 2020 at 8:12 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Jun 17, 2020 at 9:02 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Do we prohibit the checkpointer to write dirty pages and write a\n> > checkpoint record as well? If so, will the checkpointer process\n> > writes the current dirty pages and writes a checkpoint record or we\n> > skip that as well?\n>\n> I think the definition of this feature should be that you can't write\n> WAL. So, it's OK to write dirty pages in general, for example to allow\n> for buffer replacement so we can continue to run read-only queries.\n> But there's no reason for the checkpointer to do it: it shouldn't try\n> to checkpoint, and therefore it shouldn't write dirty pages either.\n> (I'm not sure if this is how the patch currently works; I'm describing\n> how I think it should work.)\n>\nYou are correct -- writing dirty pages is not restricted.\n\n> > > If there are open transactions that have acquired an XID, the sessions are killed\n> > > before the barrier is absorbed.\n> >\n> > What about prepared transactions?\n>\n> They don't matter. The problem with a running transaction that has an\n> XID is that somebody might end the session, and then we'd have to\n> write either a commit record or an abort record. But a prepared\n> transaction doesn't have that problem. You can't COMMIT PREPARED or\n> ROLLBACK PREPARED while the system is read-only, as I suppose anybody\n> would expect, but their mere existence isn't a problem.\n>\n> > What if vacuum is on an unlogged relation? Do we allow writes via\n> > vacuum to unlogged relation?\n>\n> Interesting question. I was thinking that we should probably teach the\n> autovacuum launcher to stop launching workers while the system is in a\n> READ ONLY state, but what about existing workers? 
Anything that\n> generates invalidation messages, acquires an XID, or writes WAL has to\n> be blocked in a read-only state; but I'm not sure to what extent the\n> first two of those things would be a problem for vacuuming an unlogged\n> table. I think you couldn't truncate it, at least, because that\n> acquires an XID.\n>\n> > > Another part of the patch that quite uneasy and need a discussion is that when the\n> > > shutdown in the read-only state we do skip shutdown checkpoint and at a restart, first\n> > > startup recovery will be performed and latter the read-only state will be restored to\n> > > prohibit further WAL write irrespective of recovery checkpoint succeed or not. The\n> > > concern is here if this startup recovery checkpoint wasn't ok, then it will never happen\n> > > even if it's later put back into read-write mode.\n> >\n> > I am not able to understand this problem. What do you mean by\n> > \"recovery checkpoint succeed or not\", do you add a try..catch and skip\n> > any error while performing recovery checkpoint?\n>\n> What I think should happen is that the end-of-recovery checkpoint\n> should be skipped, and then if the system is put back into read-write\n> mode later we should do it then. But I think right now the patch\n> performs the end-of-recovery checkpoint before restoring the read-only\n> state, which seems 100% wrong to me.\n>\nYeah, we need more thought on how to proceed further. I agree with Robert that\nthe current behavior is not right, since writing the end-of-recovery\ncheckpoint violates the no-WAL-write rule.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Thu, 18 Jun 2020 09:38:56 +0530",
"msg_from": "amul sul <sulamul@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, Jun 17, 2020 at 8:15 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Jun 17, 2020 at 9:51 AM tushar <tushar.ahuja@enterprisedb.com> wrote:\n> > 1) ALTER SYSTEM\n> >\n> > postgres=# alter system read only;\n> > ALTER SYSTEM\n> > postgres=# alter system reset all;\n> > ALTER SYSTEM\n> > postgres=# create table t1(n int);\n> > ERROR: cannot execute CREATE TABLE in a read-only transaction\n> >\n> > Initially i thought after firing 'Alter system reset all' , it will be\n> > back to normal.\n> >\n> > can't we have a syntax like - \"Alter system set read_only='True' ; \"\n>\n> No, this needs to be separate from the GUC-modification syntax, I\n> think. It's a different kind of state change. It doesn't, and can't,\n> just edit postgresql.auto.conf.\n>\n> > 2)When i connected to postgres in a single user mode , i was not able to\n> > set the system in read only\n> >\n> > [edb@tushar-ldap-docker bin]$ ./postgres --single -D data postgres\n> >\n> > PostgreSQL stand-alone backend 14devel\n> > backend> alter system read only;\n> > ERROR: checkpointer is not running\n> >\n> > backend>\n>\n> Hmm, that's an interesting finding. I wonder what happens if you make\n> the system read only, shut it down, and then restart it in single-user\n> mode. Given what you see here, I bet you can't put it back into a\n> read-write state from single user mode either, which seems like a\n> problem. Either single-user mode should allow changing between R/O and\n> R/W, or alternatively single-user mode should ignore ALTER SYSTEM READ\n> ONLY and always allow writes anyway.\n>\nOk, will try to enable changing between R/O and R/W in the next version.\n\nThanks Tushar for the testing.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Thu, 18 Jun 2020 09:49:45 +0530",
"msg_from": "amul sul <sulamul@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, Jun 17, 2020 at 8:12 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Jun 17, 2020 at 9:02 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Do we prohibit the checkpointer to write dirty pages and write a\n> > checkpoint record as well? If so, will the checkpointer process\n> > writes the current dirty pages and writes a checkpoint record or we\n> > skip that as well?\n>\n> I think the definition of this feature should be that you can't write\n> WAL. So, it's OK to write dirty pages in general, for example to allow\n> for buffer replacement so we can continue to run read-only queries.\n>\n\nFor buffer replacement, many-a-times we have to also perform\nXLogFlush, what do we do for that? We can't proceed without doing\nthat and erroring out from there means stopping read-only query from\nthe user perspective.\n\n> But there's no reason for the checkpointer to do it: it shouldn't try\n> to checkpoint, and therefore it shouldn't write dirty pages either.\n>\n\nWhat is the harm in doing the checkpoint before we put the system into\nREAD ONLY state? The advantage is that we can at least reduce the\nrecovery time if we allow writing checkpoint record.\n\n>\n> > What if vacuum is on an unlogged relation? Do we allow writes via\n> > vacuum to unlogged relation?\n>\n> Interesting question. I was thinking that we should probably teach the\n> autovacuum launcher to stop launching workers while the system is in a\n> READ ONLY state, but what about existing workers? Anything that\n> generates invalidation messages, acquires an XID, or writes WAL has to\n> be blocked in a read-only state; but I'm not sure to what extent the\n> first two of those things would be a problem for vacuuming an unlogged\n> table. 
I think you couldn't truncate it, at least, because that\n> acquires an XID.\n>\n\nIf the truncate operation errors out, then won't the system again\ntrigger a new autovacuum worker for the same relation, since we update\nstats at the end? Also, in general for regular tables, if there is an\nerror while it tries to write WAL, it could again trigger the autovacuum\nworker for the same relation. If this is true then it will unnecessarily\ngenerate a lot of dirty pages, and I don't think it will be good for\nthe system to behave that way.\n\n> > > Another part of the patch that quite uneasy and need a discussion is that when the\n> > > shutdown in the read-only state we do skip shutdown checkpoint and at a restart, first\n> > > startup recovery will be performed and latter the read-only state will be restored to\n> > > prohibit further WAL write irrespective of recovery checkpoint succeed or not. The\n> > > concern is here if this startup recovery checkpoint wasn't ok, then it will never happen\n> > > even if it's later put back into read-write mode.\n> >\n> > I am not able to understand this problem. What do you mean by\n> > \"recovery checkpoint succeed or not\", do you add a try..catch and skip\n> > any error while performing recovery checkpoint?\n>\n> What I think should happen is that the end-of-recovery checkpoint\n> should be skipped, and then if the system is put back into read-write\n> mode later we should do it then.\n>\n\nBut then if we have to perform recovery again, it will start from the\nprevious checkpoint. I think we have to live with it.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 18 Jun 2020 15:25:10 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, 17 Jun 2020 12:07:22 -0400\nRobert Haas <robertmhaas@gmail.com> wrote:\n[...]\n\n> > Commands that involve a whole\n> > bunch of subtle interlocking --- and, therefore, aren't going to work if\n> > anything has gone wrong already anywhere in the server --- seem like a\n> > particularly poor thing to be hanging your HA strategy on. \n> \n> It's important not to conflate controlled switchover with failover.\n> When there's a failover, you have to accept some risk of data loss or\n> service interruption; but a controlled switchover does not need to\n> carry the same risks and there are plenty of systems out there where\n> it doesn't.\n\nYes. Maybe we should make sure the wording we are using is the same for\neveryone. I already hear/read \"failover\", \"controlled failover\", \"switchover\" or\n\"controlled switchover\", this is confusing. My definition of switchover is:\n\n swapping primary and secondary status between two replicating instances. With\n no data loss. This is a controlled procedure where all steps must succeed to\n complete.\n If a step fails, the procedure fail back to the original primary with no data\n loss.\n\nHowever, Wikipedia has a broader definition, including situations where the\nswitchover is executed upon a failure: https://en.wikipedia.org/wiki/Switchover\n\nRegards,\n\n\n",
"msg_date": "Thu, 18 Jun 2020 12:35:03 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Tue, 16 Jun 2020 at 14:56, amul sul <sulamul@gmail.com> wrote:\n\n\n> The high-level goal is to make the availability/scale-out situation\n> better. The feature\n> will help HA setup where the master server needs to stop accepting WAL\n> writes\n> immediately and kick out any transaction expecting WAL writes at the end,\n> in case\n> of network down on master or replication connections failures.\n>\n> For example, this feature allows for a controlled switchover without\n> needing to shut\n> down the master. You can instead make the master read-only, wait until the\n> standby\n> catches up, and then promote the standby. The master remains available for\n> read\n> queries throughout, and also for WAL streaming, but without the\n> possibility of any\n> new write transactions. After switchover is complete, the master can be\n> shut down\n> and brought back up as a standby without needing to use pg_rewind.\n> (Eventually, it\n> would be nice to be able to make the read-only master into a standby\n> without having\n> to restart it, but that is a problem for another patch.)\n>\n> This might also help in failover scenarios. For example, if you detect\n> that the master\n> has lost network connectivity to the standby, you might make it read-only\n> after 30 s,\n> and promote the standby after 60 s, so that you never have two writable\n> masters at\n> the same time. In this case, there's still some split-brain, but it's\n> still better than what\n> we have now.\n>\n\n\n> If there are open transactions that have acquired an XID, the sessions are\n> killed\n> before the barrier is absorbed.\n>\n\n\n> inbuilt graceful failover for PostgreSQL\n>\n\nThat doesn't appear to be very graceful. 
Perhaps objections could be\nassuaged by having a smoother transition and perhaps not even a full\nbarrier, initially.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nMission Critical Databases",
"msg_date": "Thu, 18 Jun 2020 11:39:33 +0100",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
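[Editor's note] The controlled-switchover procedure quoted above (make the master read-only, wait until the standby catches up, then promote the standby) can be sketched as a small driver loop. This is an illustrative sketch only: `set_read_only`, `current_lsn`, `replayed_lsn`, and `promote` are hypothetical handle methods standing in for the real SQL and replication-management calls, not an existing API.

```python
import time

def controlled_switchover(primary, standby, poll_interval=0.0, timeout=60.0):
    """Drive the switchover sequence described in the thread.

    `primary` and `standby` are hypothetical handles expected to provide:
      primary.set_read_only()  -> stop accepting new WAL writes
      primary.current_lsn()    -> last WAL position written on the primary
      standby.replayed_lsn()   -> last WAL position replayed on the standby
      standby.promote()        -> make the standby the new primary
    """
    # 1. Freeze WAL writes on the primary; read-only queries keep working.
    primary.set_read_only()
    target = primary.current_lsn()

    # 2. Wait until the standby has replayed everything the primary wrote.
    deadline = time.monotonic() + timeout
    while standby.replayed_lsn() < target:
        if time.monotonic() > deadline:
            raise TimeoutError("standby did not catch up before promotion")
        time.sleep(poll_interval)

    # 3. Only now is promotion safe: no WAL can be lost or diverge, so the
    #    old primary can later rejoin as a standby without pg_rewind.
    standby.promote()
```

The point of the ordering is that step 1 fixes `target` once and for all; because no new WAL can appear after it, the catch-up test in step 2 cannot race with new writes.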
{
"msg_contents": "On Wed, Jun 17, 2020 at 9:37 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Jun 17, 2020 at 10:58 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> > Lastly, the arguments in favor seem pretty bogus. HA switchover normally\n> > involves just killing the primary server, not expecting that you can\n> > leisurely issue some commands to it first.\n>\n> Yeah, that's exactly the problem I want to fix. If you kill the master\n> server, then you have interrupted service, even for read-only queries.\n>\n\nYeah, but if there is a synchronous standby (a standby that provides sync\nreplication), the user can always route connections to it\n(automatically, if there is some middleware which can detect and route\nthe connection to the standby).\n\n> That sucks. Also, even if you don't care about interrupting service on\n> the master, it's actually sorta hard to guarantee a clean switchover.\n>\n\nFair enough. However, it is not described in the initial email\n(unless I have missed it; there is a mention that this patch is one\npart of that bigger feature but no further explanation of that bigger\nfeature) how this feature will allow a clean switchover. I think\nbefore we put the system into READ ONLY state, there could be some WAL\nwhich we haven't sent to the standby; what do we do about that?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 18 Jun 2020 16:26:06 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Jun 18, 2020 at 3:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jun 17, 2020 at 8:12 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Wed, Jun 17, 2020 at 9:02 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > Do we prohibit the checkpointer to write dirty pages and write a\n> > > checkpoint record as well? If so, will the checkpointer process\n> > > writes the current dirty pages and writes a checkpoint record or we\n> > > skip that as well?\n> >\n> > I think the definition of this feature should be that you can't write\n> > WAL. So, it's OK to write dirty pages in general, for example to allow\n> > for buffer replacement so we can continue to run read-only queries.\n> >\n>\n> For buffer replacement, many-a-times we have to also perform\n> XLogFlush, what do we do for that? We can't proceed without doing\n> that and erroring out from there means stopping read-only query from\n> the user perspective.\n>\nRead-only does not restrict XLogFlush().\n\n> > But there's no reason for the checkpointer to do it: it shouldn't try\n> > to checkpoint, and therefore it shouldn't write dirty pages either.\n> >\n>\n> What is the harm in doing the checkpoint before we put the system into\n> READ ONLY state? The advantage is that we can at least reduce the\n> recovery time if we allow writing checkpoint record.\n>\nThe checkpoint could take a long time, which defeats the intent of switching\nquickly to the read-only state.\n\n> >\n> > > What if vacuum is on an unlogged relation? Do we allow writes via\n> > > vacuum to unlogged relation?\n> >\n> > Interesting question. I was thinking that we should probably teach the\n> > autovacuum launcher to stop launching workers while the system is in a\n> > READ ONLY state, but what about existing workers? 
Anything that\n> > generates invalidation messages, acquires an XID, or writes WAL has to\n> > be blocked in a read-only state; but I'm not sure to what extent the\n> > first two of those things would be a problem for vacuuming an unlogged\n> > table. I think you couldn't truncate it, at least, because that\n> > acquires an XID.\n> >\n>\n> If the truncate operation errors out, then won't the system will again\n> trigger a new autovacuum worker for the same relation as we update\n> stats at the end? Also, in general for regular tables, if there is an\n> error while it tries to WAL, it could again trigger the autovacuum\n> worker for the same relation. If this is true then unnecessarily it\n> will generate a lot of dirty pages and don't think it will be good for\n> the system to behave that way?\n>\nNo new autovacuum worker will be forked in the read-only state and existing will\nhave an error if they try to write WAL after barrier absorption.\n\n> > > > Another part of the patch that quite uneasy and need a discussion is that when the\n> > > > shutdown in the read-only state we do skip shutdown checkpoint and at a restart, first\n> > > > startup recovery will be performed and latter the read-only state will be restored to\n> > > > prohibit further WAL write irrespective of recovery checkpoint succeed or not. The\n> > > > concern is here if this startup recovery checkpoint wasn't ok, then it will never happen\n> > > > even if it's later put back into read-write mode.\n> > >\n> > > I am not able to understand this problem. What do you mean by\n> > > \"recovery checkpoint succeed or not\", do you add a try..catch and skip\n> > > any error while performing recovery checkpoint?\n> >\n> > What I think should happen is that the end-of-recovery checkpoint\n> > should be skipped, and then if the system is put back into read-write\n> > mode later we should do it then.\n> >\n>\n> But then if we have to perform recovery again, it will start from the\n> previous checkpoint. 
I think we have to live with it.\n>\nLet me explain the case: if we skip the end-of-recovery checkpoint while\nstarting the system in read-only mode, and then later change the state to\nread-write and do a few write operations and online checkpoints, will that be\nfine? I have yet to explore those things.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Thu, 18 Jun 2020 16:48:51 +0530",
"msg_from": "amul sul <sulamul@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Jun 18, 2020 at 5:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> For buffer replacement, many-a-times we have to also perform\n> XLogFlush, what do we do for that? We can't proceed without doing\n> that and erroring out from there means stopping read-only query from\n> the user perspective.\n\nI think we should stop WAL writes, then XLogFlush() once, then declare\nthe system R/O. After that there might be more XLogFlush() calls but\nthere won't be any new WAL, so they won't do anything.\n\n> > But there's no reason for the checkpointer to do it: it shouldn't try\n> > to checkpoint, and therefore it shouldn't write dirty pages either.\n>\n> What is the harm in doing the checkpoint before we put the system into\n> READ ONLY state? The advantage is that we can at least reduce the\n> recovery time if we allow writing checkpoint record.\n\nWell, as Andres says in\nhttp://postgr.es/m/20200617180546.yucxtiupvxghxss6@alap3.anarazel.de\nit can take a really long time.\n\n> > Interesting question. I was thinking that we should probably teach the\n> > autovacuum launcher to stop launching workers while the system is in a\n> > READ ONLY state, but what about existing workers? Anything that\n> > generates invalidation messages, acquires an XID, or writes WAL has to\n> > be blocked in a read-only state; but I'm not sure to what extent the\n> > first two of those things would be a problem for vacuuming an unlogged\n> > table. I think you couldn't truncate it, at least, because that\n> > acquires an XID.\n> >\n>\n> If the truncate operation errors out, then won't the system will again\n> trigger a new autovacuum worker for the same relation as we update\n> stats at the end?\n\nNot if we do what I said in that paragraph. 
If we're not launching new\nworkers we can't again trigger a worker for the same relation.\n\n> Also, in general for regular tables, if there is an\n> error while it tries to WAL, it could again trigger the autovacuum\n> worker for the same relation. If this is true then unnecessarily it\n> will generate a lot of dirty pages and don't think it will be good for\n> the system to behave that way?\n\nI don't see how this would happen. VACUUM can't really dirty pages\nwithout writing WAL, can it? And, anyway, if there's an error, we're\nnot going to try again for the same relation unless we launch new\nworkers.\n\n> > What I think should happen is that the end-of-recovery checkpoint\n> > should be skipped, and then if the system is put back into read-write\n> > mode later we should do it then.\n>\n> But then if we have to perform recovery again, it will start from the\n> previous checkpoint. I think we have to live with it.\n\nYeah. I don't think it's that bad. The case where you shut down the\nsystem while it's read-only should be a somewhat unusual one. Normally\nyou would mark it read only and then promote a standby and shut the\nold master down (or demote it). But what you want is that if it does\nhappen to go down for some reason before all the WAL is streamed, you\ncan bring it back up and finish streaming the WAL without generating\nany new WAL.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Jun 2020 10:52:49 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
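[Editor's note] The ordering Robert describes, stop WAL writes, `XLogFlush()` once, then declare the system read-only, can be modeled with a toy state machine. The class and method names below are invented for illustration and do not correspond to PostgreSQL internals; the point is only the invariant that the last flush happens after inserts are blocked, so later flush calls find nothing new to do.

```python
class WalState:
    """Toy model of the read-only transition ordering discussed above."""

    def __init__(self):
        self.inserted = 0        # WAL bytes generated so far
        self.flushed = 0         # WAL bytes durably flushed
        self.read_only = False
        self._inserts_blocked = False

    def insert_wal(self, nbytes):
        if self._inserts_blocked:
            raise RuntimeError("cannot write WAL: system is read only")
        self.inserted += nbytes

    def flush(self):
        # Once everything is flushed, later calls are harmless no-ops --
        # mirroring "there might be more XLogFlush() calls but there won't
        # be any new WAL, so they won't do anything".
        self.flushed = self.inserted

    def make_read_only(self):
        # Order matters: block new inserts first, flush the remaining WAL
        # tail once, and only then declare the system read-only.
        self._inserts_blocked = True
        self.flush()
        self.read_only = True
```

If the declaration happened before the final flush, a concurrent insert could slip in between and leave unflushed WAL behind in a supposedly read-only system; blocking inserts first closes that window.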
{
"msg_contents": "On Thu, Jun 18, 2020 at 7:19 AM amul sul <sulamul@gmail.com> wrote:\n> Let me explain the case, if we do skip the end-of-recovery checkpoint while\n> starting the system in read-only mode and then later changing the state to\n> read-write and do a few write operations and online checkpoints, that will be\n> fine? I am yet to explore those things.\n\nI think we'd want the FIRST write operation to be the end-of-recovery\ncheckpoint, before the system is fully read-write. And then after that\ncompletes you could do other things.\n\nIt would be good if we can get an opinion from Andres about this,\nsince I think he has thought about this stuff quite a bit.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Jun 2020 10:54:22 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Jun 18, 2020 at 6:39 AM Simon Riggs <simon@2ndquadrant.com> wrote:\n> That doesn't appear to be very graceful. Perhaps objections could be assuaged by having a smoother transition and perhaps not even a full barrier, initially.\n\nYeah, it's not ideal, though still better than what we have now. What\ndo you mean by \"a smoother transition and perhaps not even a full\nbarrier\"? I think if you want to switch the primary to another machine\nand make the old primary into a standby, you really need to arrest WAL\nwrites completely. It would be better to make existing write\ntransactions ERROR rather than FATAL, but there are some very\ndifficult cases there, so I would like to leave that as a possible\nlater improvement.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Jun 2020 10:56:27 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, 18 Jun 2020 10:52:49 -0400\nRobert Haas <robertmhaas@gmail.com> wrote:\n\n[...]\n> But what you want is that if it does happen to go down for some reason before\n> all the WAL is streamed, you can bring it back up and finish streaming the\n> WAL without generating any new WAL.\n\nThanks to cascading replication, it could be very possible without this READ\nONLY mode, just in recovery mode, isn't it?\n\nRegards,\n\n\n",
"msg_date": "Thu, 18 Jun 2020 17:08:38 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Jun 18, 2020 at 11:08 AM Jehan-Guillaume de Rorthais\n<jgdr@dalibo.com> wrote:\n> Thanks to cascading replication, it could be very possible without this READ\n> ONLY mode, just in recovery mode, isn't it?\n\nYeah, perhaps. I just wrote an email about that over on the demote\nthread, so I won't repeat it here.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Jun 2020 11:23:41 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Jun 18, 2020 at 8:23 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Jun 18, 2020 at 5:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > For buffer replacement, many-a-times we have to also perform\n> > XLogFlush, what do we do for that? We can't proceed without doing\n> > that and erroring out from there means stopping read-only query from\n> > the user perspective.\n>\n> I think we should stop WAL writes, then XLogFlush() once, then declare\n> the system R/O. After that there might be more XLogFlush() calls but\n> there won't be any new WAL, so they won't do anything.\n>\nYeah, the proposed v1 patch does the same.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Fri, 19 Jun 2020 09:28:53 +0530",
"msg_from": "amul sul <sulamul@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "Hi All,\n\nAttaching a new set of patches, rebased atop the latest master head, which\nincludes the following changes:\n\n1. Enabling ALTER SYSTEM READ { ONLY | WRITE } support for single-user mode,\nas discussed here [1]\n\n2. Now skipping the startup checkpoint if the system is in read-only mode, as\ndiscussed in [2].\n\n3. While changing the system state to READ-WRITE, a new checkpoint request will\nbe made.\n\nAll these changes are part of the v2-0004 patch; the rest of the patches\nare the same as in v1.\n\nRegards,\nAmul\n\n1] https://postgr.es/m/CAAJ_b96WPPt-=vyjpPUy8pG0vAvLgpjLukCZONUkvdR1_exrKA@mail.gmail.com\n2] https://postgr.es/m/CAAJ_b95hddJrgciCfri2NkTLdEUSz6zdMSjoDuWPFPBFvJy+Kg@mail.gmail.com",
"msg_date": "Mon, 22 Jun 2020 11:59:09 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On 6/22/20 11:59 AM, Amul Sul wrote:\n> 2. Now skipping the startup checkpoint if the system is read-only mode, as\n> discussed [2].\n\nI am not able to get pg_checksums output after shutting down my server\nin read-only mode.\n\nSteps -\n\n1.initdb (./initdb -k -D data)\n2.start the server(./pg_ctl -D data start)\n3.connect to psql (./psql postgres)\n4.Fire query (alter system read only;)\n5.shutdown the server(./pg_ctl -D data stop)\n6.pg_checksums\n\n[edb@tushar-ldap-docker bin]$ ./pg_checksums -D data\npg_checksums: error: cluster must be shut down\n[edb@tushar-ldap-docker bin]$\n\nResult - (when the server is not in read-only mode)\n\n[edb@tushar-ldap-docker bin]$ ./pg_checksums -D data\nChecksum operation completed\nFiles scanned: 916\nBlocks scanned: 2976\nBad checksums: 0\nData checksum version: 1\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n",
"msg_date": "Wed, 24 Jun 2020 13:54:29 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
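[Editor's note] The `cluster must be shut down` error above comes from pg_checksums checking the cluster state recorded in the control file. A minimal sketch of that decision, parsing the textual `Database cluster state` line printed by pg_controldata; the accepted-state list here is an assumption based on pg_checksums' documented requirement of a cleanly shut down cluster, not the tool's actual source:

```python
def cluster_state(controldata_output):
    """Extract the 'Database cluster state' value from pg_controldata output."""
    for line in controldata_output.splitlines():
        if line.startswith("Database cluster state:"):
            return line.split(":", 1)[1].strip()
    raise ValueError("no 'Database cluster state' line found")

def checksums_safe(controldata_output):
    # Offline checksum verification needs on-disk blocks in a consistent
    # state, which only a clean shutdown (with its shutdown checkpoint)
    # guarantees -- the two "shut down" states below; anything else
    # (e.g. "in production") would make pg_checksums refuse to run.
    return cluster_state(controldata_output) in (
        "shut down",
        "shut down in recovery",
    )
```

Running this against the pg_controldata output of the read-only-stopped cluster from the report above would show which state the proposed patch leaves in the control file, which is exactly what Michael asks next.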
{
"msg_contents": "On Wed, Jun 24, 2020 at 1:54 PM tushar <tushar.ahuja@enterprisedb.com> wrote:\n>\n> On 6/22/20 11:59 AM, Amul Sul wrote:\n> > 2. Now skipping the startup checkpoint if the system is read-only mode, as\n> > discussed [2].\n>\n> I am not able to perform pg_checksums o/p after shutting down my server\n> in read only mode .\n>\n> Steps -\n>\n> 1.initdb (./initdb -k -D data)\n> 2.start the server(./pg_ctl -D data start)\n> 3.connect to psql (./psql postgres)\n> 4.Fire query (alter system read only;)\n> 5.shutdown the server(./pg_ctl -D data stop)\n> 6.pg_checksums\n>\n> [edb@tushar-ldap-docker bin]$ ./pg_checksums -D data\n> pg_checksums: error: cluster must be shut down\n> [edb@tushar-ldap-docker bin]$\n>\n> Result - (when server is not in read only)\n>\n> [edb@tushar-ldap-docker bin]$ ./pg_checksums -D data\n> Checksum operation completed\n> Files scanned: 916\n> Blocks scanned: 2976\n> Bad checksums: 0\n> Data checksum version: 1\n>\nI think that's expected, since the server wasn't shut down cleanly; a similar\nerror can be seen with any server which has been shut down in immediate mode\n(pg_ctl -D data_dir -m i).\n\nRegards,\nAmul\n\n\n",
"msg_date": "Fri, 26 Jun 2020 10:11:41 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "Hi,\n\nOn Wed, Jun 24, 2020 at 01:54:29PM +0530, tushar wrote:\n> On 6/22/20 11:59 AM, Amul Sul wrote:\n> > 2. Now skipping the startup checkpoint if the system is read-only mode, as\n> > discussed [2].\n> \n> I am not able to perform pg_checksums o/p after shutting down my server in\n> read only mode.\n> \n> Steps -\n> \n> 1.initdb (./initdb -k -D data)\n> 2.start the server(./pg_ctl -D data start)\n> 3.connect to psql (./psql postgres)\n> 4.Fire query (alter system read only;)\n> 5.shutdown the server(./pg_ctl -D data stop)\n> 6.pg_checksums\n> \n> [edb@tushar-ldap-docker bin]$ ./pg_checksums -D data\n> pg_checksums: error: cluster must be shut down\n> [edb@tushar-ldap-docker bin]$\n\nWhat's the 'Database cluster state' from pg_controldata at this point?\n\n\nMichael\n\n-- \nMichael Banck\nProjektleiter / Senior Berater\nTel.: +49 2166 9901-171\nFax: +49 2166 9901-100\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB Mönchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nGeschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer\n\nUnser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz\n\n\n",
"msg_date": "Fri, 26 Jun 2020 08:45:53 +0200",
"msg_from": "Michael Banck <michael.banck@credativ.de>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Fri, Jun 26, 2020 at 10:11:41AM +0530, Amul Sul wrote:\n> I think that's expected since the server isn't clean shutdown, similar error can\n> be seen with any server which has been shutdown in immediate mode\n> (pg_clt -D data_dir -m i).\n\nAny operation working on on-disk relation blocks needs to have a\nconsistent state, and a clean shutdown gives this guarantee thanks to\nthe shutdown checkpoint (see also pg_rewind). There are two states in\nthe control file, shutdown for a primary and shutdown while in\nrecovery to cover that. So if you stop the server cleanly but fail to\nsee a proper state with pg_checksums, it seems to me that the proposed\npatch does not handle correctly the state of the cluster in the\ncontrol file at shutdown. That's not good.\n--\nMichael",
"msg_date": "Fri, 26 Jun 2020 18:59:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Fri, Jun 26, 2020 at 12:15 PM Michael Banck\n<michael.banck@credativ.de> wrote:\n>\n> Hi,\n>\n> On Wed, Jun 24, 2020 at 01:54:29PM +0530, tushar wrote:\n> > On 6/22/20 11:59 AM, Amul Sul wrote:\n> > > 2. Now skipping the startup checkpoint if the system is read-only mode, as\n> > > discussed [2].\n> >\n> > I am not able to perform pg_checksums o/p after shutting down my server in\n> > read only mode .\n> >\n> > Steps -\n> >\n> > 1.initdb (./initdb -k -D data)\n> > 2.start the server(./pg_ctl -D data start)\n> > 3.connect to psql (./psql postgres)\n> > 4.Fire query (alter system read only;)\n> > 5.shutdown the server(./pg_ctl -D data stop)\n> > 6.pg_checksums\n> >\n> > [edb@tushar-ldap-docker bin]$ ./pg_checksums -D data\n> > pg_checksums: error: cluster must be shut down\n> > [edb@tushar-ldap-docker bin]$\n>\n> What's the 'Database cluster state' from pg_controldata at this point?\n>\n\"in production\"\n\nRegards,\nAmul\n\n\n",
"msg_date": "Fri, 26 Jun 2020 16:09:47 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Fri, Jun 26, 2020 at 5:59 AM Michael Paquier <michael@paquier.xyz> wrote:\n> Any operation working on on-disk relation blocks needs to have a\n> consistent state, and a clean shutdown gives this guarantee thanks to\n> the shutdown checkpoint (see also pg_rewind). There are two states in\n> the control file, shutdown for a primary and shutdown while in\n> recovery to cover that. So if you stop the server cleanly but fail to\n> see a proper state with pg_checksums, it seems to me that the proposed\n> patch does not handle correctly the state of the cluster in the\n> control file at shutdown. That's not good.\n\nI think it is actually very good. If a feature that supposedly\nprevents writing WAL permitted a shutdown checkpoint to be written, it\nwould be failing to accomplish its design goal. There is not much of a\nuse case for a feature that stops WAL from being written except when\nit doesn't.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 26 Jun 2020 08:46:41 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "Attached is a rebased version for the latest master head[1].\n\nRegards,\nAmul\n\n1] Commit # 101f903e51f52bf595cd8177d2e0bc6fe9000762",
"msg_date": "Tue, 14 Jul 2020 12:07:28 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "Hi All,\nI was testing the feature on top of v3 patch and found the \"pg_upgrade\"\nfailure after keeping \"alter system read only;\" as below:\n\n-- Steps:\n./initdb -D data\n./pg_ctl -D data -l logs start -c\n./psql postgres\nalter system read only;\n\\q\n./pg_ctl -D data -l logs stop -c\n\n./initdb -D data2\n./pg_upgrade -b . -B . -d data -D data2 -p 5555 -P 5520\n\n\n[edb@localhost bin]$ ./pg_upgrade -b . -B . -d data -D data2 -p 5555 -P 5520\nPerforming Consistency Checks\n-----------------------------\nChecking cluster versions ok\n\nThe source cluster was not shut down cleanly.\nFailure, exiting\n\n--Below is the logs\n2020-07-16 11:04:20.305 IST [105788] LOG: starting PostgreSQL 14devel on\nx86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat\n4.8.5-39), 64-bit\n2020-07-16 11:04:20.309 IST [105788] LOG: listening on IPv6 address \"::1\",\nport 5432\n2020-07-16 11:04:20.309 IST [105788] LOG: listening on IPv4 address\n\"127.0.0.1\", port 5432\n2020-07-16 11:04:20.321 IST [105788] LOG: listening on Unix socket\n\"/tmp/.s.PGSQL.5432\"\n2020-07-16 11:04:20.347 IST [105789] LOG: database system was shut down at\n2020-07-16 11:04:20 IST\n2020-07-16 11:04:20.352 IST [105788] LOG: database system is ready to\naccept connections\n2020-07-16 11:04:20.534 IST [105790] LOG: system is now read only\n2020-07-16 11:04:20.542 IST [105788] LOG: received fast shutdown request\n2020-07-16 11:04:20.543 IST [105788] LOG: aborting any active transactions\n2020-07-16 11:04:20.544 IST [105788] LOG: background worker \"logical\nreplication launcher\" (PID 105795) exited with exit code 1\n2020-07-16 11:04:20.544 IST [105790] LOG: shutting down\n2020-07-16 11:04:20.544 IST [105790] LOG: skipping shutdown checkpoint\nbecause the system is read only\n2020-07-16 11:04:20.551 IST [105788] LOG: database system is shut down\n\nOn Tue, Jul 14, 2020 at 12:08 PM Amul Sul <sulamul@gmail.com> wrote:\n\n> Attached is a rebased version for the latest master head[1].\n>\n> Regards,\n> Amul\n>\n> 1] Commit # 101f903e51f52bf595cd8177d2e0bc6fe9000762\n>\n\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 16 Jul 2020 11:41:54 +0530",
"msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Jul 16, 2020 at 2:12 AM Prabhat Sahu <prabhat.sahu@enterprisedb.com>\nwrote:\n\n> Hi All,\n> I was testing the feature on top of v3 patch and found the \"pg_upgrade\"\n> failure after keeping \"alter system read only;\" as below:\n>\n\nThat's expected. You can't perform a clean shutdown without writing WAL.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 16 Jul 2020 09:10:45 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "Hello,\n\nI think we should really term this feature, as it stands, as a means to\nsolely stop WAL writes from happening.\n\nThe feature doesn't truly make the system read-only (e.g. dirty buffer\nflushes may succeed the system being put into a read-only state), which\ndoes make it confusing to a degree.\n\nIdeally, if we were to have a read-only system, we should be able to run\npg_checksums on it, or take file-system snapshots etc, without the need\nto shut down the cluster. It would also enable an interesting use case:\nwe should also be able to do a live upgrade on any running cluster and\nentertain read-only queries at the same time, given that all the\ncluster's files will be immutable?\n\nSo if we are not going to address those cases, we should change the\nsyntax and remove the notion of read-only. It could be:\n\nALTER SYSTEM SET wal_writes TO off|on;\nor\nALTER SYSTEM SET prohibit_wal TO off|on;\n\nIf we are going to try to make it truly read-only, and cater to the\nother use cases, we have to:\n\nPerform a checkpoint before declaring the system read-only (i.e. before\nthe command returns). This may be expensive of course, as Andres has\npointed out in this thread, but it is a price that has to be paid. If we\ndo this checkpoint, then we can avoid an additional shutdown checkpoint\nand an end-of-recovery checkpoint (if we restart the primary after a\ncrash while in read-only mode). Also, we would have to prevent any\noperation that touches control files, which I am not sure we do today in\nthe current patch.\n\nWhy not have the best of both worlds? Consider:\n\nALTER SYSTEM SET read_only to {off, on, wal};\n\n-- on: wal writes off + no writes to disk\n-- off: default\n-- wal: only wal writes off\n\nOf course, there can probably be better syntax for the above.\n\nRegards,\n\nSoumyadeep (VMware)\n\n\n",
"msg_date": "Wed, 22 Jul 2020 15:03:15 -0700",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "+1 to this feature and I have been thinking about it for sometime. There\nare several use cases with marking database read only (no transaction log\ngeneration). Some of the examples in a hosted service scenario are 1/ when\ncustomer runs out of storage space, 2/ Upgrading the server to a different\nmajor version (current server can be set to read only, new one can be built\nand then switch DNS), 3/ If user wants to force a database to read only and\nnot accept writes, may be for import / export a database.\n\nThanks,\nSatya\n\nOn Wed, Jul 22, 2020 at 3:04 PM Soumyadeep Chakraborty <\nsoumyadeep2007@gmail.com> wrote:\n\n> Hello,\n>\n> I think we should really term this feature, as it stands, as a means to\n> solely stop WAL writes from happening.\n>\n> The feature doesn't truly make the system read-only (e.g. dirty buffer\n> flushes may succeed the system being put into a read-only state), which\n> does make it confusing to a degree.\n>\n> Ideally, if we were to have a read-only system, we should be able to run\n> pg_checksums on it, or take file-system snapshots etc, without the need\n> to shut down the cluster. It would also enable an interesting use case:\n> we should also be able to do a live upgrade on any running cluster and\n> entertain read-only queries at the same time, given that all the\n> cluster's files will be immutable?\n>\n> So if we are not going to address those cases, we should change the\n> syntax and remove the notion of read-only. It could be:\n>\n> ALTER SYSTEM SET wal_writes TO off|on;\n> or\n> ALTER SYSTEM SET prohibit_wal TO off|on;\n>\n> If we are going to try to make it truly read-only, and cater to the\n> other use cases, we have to:\n>\n> Perform a checkpoint before declaring the system read-only (i.e. before\n> the command returns). This may be expensive of course, as Andres has\n> pointed out in this thread, but it is a price that has to be paid. If we\n> do this checkpoint, then we can avoid an additional shutdown checkpoint\n> and an end-of-recovery checkpoint (if we restart the primary after a\n> crash while in read-only mode). Also, we would have to prevent any\n> operation that touches control files, which I am not sure we do today in\n> the current patch.\n>\n> Why not have the best of both worlds? Consider:\n>\n> ALTER SYSTEM SET read_only to {off, on, wal};\n>\n> -- on: wal writes off + no writes to disk\n> -- off: default\n> -- wal: only wal writes off\n>\n> Of course, there can probably be better syntax for the above.\n>\n> Regards,\n>\n> Soumyadeep (VMware)\n>\n>\n>",
"msg_date": "Wed, 22 Jul 2020 16:03:57 -0700",
"msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "Hi Amul,\n\nOn Tue, Jun 16, 2020 at 6:56 AM amul sul <sulamul@gmail.com> wrote:\n> The proposed feature is built atop of super barrier mechanism commit[1] to\n> coordinate\n> global state changes to all active backends. Backends which executed\n> ALTER SYSTEM READ { ONLY | WRITE } command places request to checkpointer\n> process to change the requested WAL read/write state aka WAL prohibited and\n> WAL\n> permitted state respectively. When the checkpointer process sees the WAL\n> prohibit\n> state change request, it emits a global barrier and waits until all\n> backends that\n> participate in the ProcSignal absorbs it.\n\nWhy should the checkpointer have the responsibility of setting the state\nof the system to read-only? Maybe this should be the postmaster's\nresponsibility - the checkpointer should just handle requests to\ncheckpoint. I think the backend requesting the read-only transition\nshould signal the postmaster, which in turn, will take on the aforesaid\nresponsibilities. The postmaster, could also additionally request a\ncheckpoint, using RequestCheckpoint() (if we want to support the\nread-onlyness discussed in [1]). checkpointer.c should not be touched by\nthis feature.\n\nFollowing on, any condition variable used by the backend to wait for the\nALTER SYSTEM command to finish (the patch uses\nCheckpointerShmem->readonly_cv), could be housed in ProcGlobal.\n\nRegards,\nSoumyadeep (VMware)\n\n[1] https://www.postgresql.org/message-id/CAE-ML%2B-zdWODAyWNs_Eu-siPxp_3PGbPkiSg%3DtoLeW9iS_eioA%40mail.gmail.com\n\n\n",
"msg_date": "Wed, 22 Jul 2020 17:37:28 -0700",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Jul 23, 2020 at 3:33 AM Soumyadeep Chakraborty\n<soumyadeep2007@gmail.com> wrote:\n>\n> Hello,\n>\n> I think we should really term this feature, as it stands, as a means to\n> solely stop WAL writes from happening.\n>\n\nTrue.\n\n> The feature doesn't truly make the system read-only (e.g. dirty buffer\n> flushes may succeed the system being put into a read-only state), which\n> does make it confusing to a degree.\n>\n> Ideally, if we were to have a read-only system, we should be able to run\n> pg_checksums on it, or take file-system snapshots etc, without the need\n> to shut down the cluster. It would also enable an interesting use case:\n> we should also be able to do a live upgrade on any running cluster and\n> entertain read-only queries at the same time, given that all the\n> cluster's files will be immutable?\n>\n\nRead-only is for the queries.\n\nThe aim of this feature is preventing new WAL records from being generated, not\npreventing them from being flushed to disk, or streamed to standbys, or anything\nelse. The rest should happen as normal.\n\nIf you can't flush WAL, then you might not be able to evict some number of\nbuffers, which in the worst case could be large. That's because you can't evict\na dirty buffer until WAL has been flushed up to the buffer's LSN (otherwise,\nyou wouldn't be following the WAL-before-data rule). And having a potentially\nlarge number of unevictable buffers around sounds terrible, not only for\nperformance, but also for having the system keep working at all.\n\n> So if we are not going to address those cases, we should change the\n> syntax and remove the notion of read-only. It could be:\n>\n> ALTER SYSTEM SET wal_writes TO off|on;\n> or\n> ALTER SYSTEM SET prohibit_wal TO off|on;\n>\n> If we are going to try to make it truly read-only, and cater to the\n> other use cases, we have to:\n>\n> Perform a checkpoint before declaring the system read-only (i.e. before\n> the command returns). 
This may be expensive of course, as Andres has\n> pointed out in this thread, but it is a price that has to be paid. If we\n> do this checkpoint, then we can avoid an additional shutdown checkpoint\n> and an end-of-recovery checkpoint (if we restart the primary after a\n> crash while in read-only mode). Also, we would have to prevent any\n> operation that touches control files, which I am not sure we do today in\n> the current patch.\n>\n\nThe intention is to change the system to read-only ASAP; the checkpoint will\nmake it much slower.\n\nI don't think we can skip control file updates that need to make read-only\nstate persistent across the restart.\n\n> Why not have the best of both worlds? Consider:\n>\n> ALTER SYSTEM SET read_only to {off, on, wal};\n>\n> -- on: wal writes off + no writes to disk\n> -- off: default\n> -- wal: only wal writes off\n>\n> Of course, there can probably be better syntax for the above.\n>\n\nSure, thanks for the suggestions. Syntax change is not a harder part; we can\nchoose the better one later.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Thu, 23 Jul 2020 16:11:45 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Jul 23, 2020 at 4:34 AM SATYANARAYANA NARLAPURAM\n<satyanarlapuram@gmail.com> wrote:\n>\n> +1 to this feature and I have been thinking about it for sometime. There are several use cases with marking database read only (no transaction log generation). Some of the examples in a hosted service scenario are 1/ when customer runs out of storage space, 2/ Upgrading the server to a different major version (current server can be set to read only, new one can be built and then switch DNS), 3/ If user wants to force a database to read only and not accept writes, may be for import / export a database.\n>\nThanks for voting & listing the realistic use cases.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Thu, 23 Jul 2020 16:13:14 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Jul 23, 2020 at 6:08 AM Soumyadeep Chakraborty\n<soumyadeep2007@gmail.com> wrote:\n>\n> Hi Amul,\n>\n\nThanks, Soumyadeep, for looking and putting your thoughts on the patch.\n\n> On Tue, Jun 16, 2020 at 6:56 AM amul sul <sulamul@gmail.com> wrote:\n> > The proposed feature is built atop of super barrier mechanism commit[1] to\n> > coordinate\n> > global state changes to all active backends. Backends which executed\n> > ALTER SYSTEM READ { ONLY | WRITE } command places request to checkpointer\n> > process to change the requested WAL read/write state aka WAL prohibited and\n> > WAL\n> > permitted state respectively. When the checkpointer process sees the WAL\n> > prohibit\n> > state change request, it emits a global barrier and waits until all\n> > backends that\n> > participate in the ProcSignal absorbs it.\n>\n> Why should the checkpointer have the responsibility of setting the state\n> of the system to read-only? Maybe this should be the postmaster's\n> responsibility - the checkpointer should just handle requests to\n> checkpoint.\n\nWell, once we've initiated the change to a read-only state, we probably want to\nalways either finish that change or go back to read-write, even if the process\nthat initiated the change is interrupted. Leaving the system in a\nhalf-way-in-between state long term seems bad. We could have added a new\nbackground process, but chose to put the checkpointer in charge of making the\nstate change, to avoid a new background process and keep the first version of\nthe patch simple. The checkpointer isn't likely to get killed, but if it does,\nit will be relaunched and the new one can clean things up. On the other hand,\nI agree making the checkpointer responsible for more than one thing might not\nbe a good idea, but I don't think the postmaster should do work that any\nbackground process can do.\n\n> I think the backend requesting the read-only transition\n> should signal the postmaster, which in turn, will take on the aforesaid\n> responsibilities. The postmaster, could also additionally request a\n> checkpoint, using RequestCheckpoint() (if we want to support the\n> read-onlyness discussed in [1]). checkpointer.c should not be touched by\n> this feature.\n>\n> Following on, any condition variable used by the backend to wait for the\n> ALTER SYSTEM command to finish (the patch uses\n> CheckpointerShmem->readonly_cv), could be housed in ProcGlobal.\n>\n\nRelevant only if we don't want to use the checkpointer process.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Thu, 23 Jul 2020 16:26:35 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Jul 23, 2020 at 3:42 AM Amul Sul <sulamul@gmail.com> wrote:\n> The aim of this feature is preventing new WAL records from being generated, not\n> preventing them from being flushed to disk, or streamed to standbys, or anything\n> else. The rest should happen as normal.\n>\n> If you can't flush WAL, then you might not be able to evict some number of\n> buffers, which in the worst case could be large. That's because you can't evict\n> a dirty buffer until WAL has been flushed up to the buffer's LSN (otherwise,\n> you wouldn't be following the WAL-before-data rule). And having a potentially\n> large number of unevictable buffers around sounds terrible, not only for\n> performance, but also for having the system keep working at all.\n\nIn the read-only level I was suggesting, I wasn't suggesting that we\nstop WAL flushes, in fact we should flush the WAL before we mark the\nsystem as read-only. Once the system declares itself as read-only, it\nwill not perform any more on-disk changes; It may perform all the\nflushes it needs as a part of the read-only request handling.\n\nWAL should still stream to the secondary of course, even after you mark\nthe primary as read-only.\n\n> Read-only is for the queries.\n\nWhat I am saying is it doesn't have to be just the queries. 
I think we\ncan cater to all the other use cases simply by forcing a checkpoint\nbefore marking the system as read-only.\n\n> The intention is to change the system to read-only ASAP; the checkpoint will\n> make it much slower.\n\nI agree - if one needs that speed, then they can do the equivalent of:\nALTER SYSTEM SET read_only to 'wal';\nand the expensive checkpoint you mentioned can be avoided.\n\n> I don't think we can skip control file updates that need to make read-only\n> state persistent across the restart.\n\nI was referring to control file updates post the read-only state change.\nAny updates done as a part of the state change is totally cool.\n\n\nRegards,\nSoumyadeep (VMware)\n\n\n",
"msg_date": "Thu, 23 Jul 2020 09:10:41 -0700",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Jul 23, 2020 at 3:57 AM Amul Sul <sulamul@gmail.com> wrote:\n\n> Well, once we've initiated the change to a read-only state, we probably want to\n> always either finish that change or go back to read-write, even if the process\n> that initiated the change is interrupted. Leaving the system in a\n> half-way-in-between state long term seems bad. Maybe we would have put some\n> background process, but choose the checkpointer in charge of making the state\n> change and to avoid the new background process to keep the first version patch\n> simple. The checkpointer isn't likely to get killed, but if it does, it will\n> be relaunched and the new one can clean things up. On the other hand, I agree\n> making the checkpointer responsible for more than one thing might not\n> be a good idea\n> but I don't think the postmaster should do the work that any\n> background process can\n> do.\n\n+1 for doing it in a background process rather than in the backend\nitself (as we can't risk doing it in a backend as it can crash and won't\nrestart and clean up as a background process would).\n\nAs my co-worker pointed out to me, doing the work in the postmaster is a\nvery bad idea as we don't want delays in serving connection requests on\naccount of the barrier that comes with this patch.\n\nI would like to see this responsibility in a separate auxiliary process\nbut I guess having it in the checkpointer isn't the end of the world.\n\nRegards,\nSoumyadeep (VMware)\n\n\n",
"msg_date": "Thu, 23 Jul 2020 17:56:37 -0700",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Jun 18, 2020 at 7:54 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I think we'd want the FIRST write operation to be the end-of-recovery\n> checkpoint, before the system is fully read-write. And then after that\n> completes you could do other things.\n\nI can't see why this is necessary from a correctness or performance\npoint of view. Maybe I'm missing something.\n\nIn case it is necessary, the patch set does not wait for the checkpoint to\ncomplete before marking the system as read-write. Refer:\n\n/* Set final state by clearing in-progress flag bit */\nif (SetWALProhibitState(wal_state & ~(WALPROHIBIT_TRANSITION_IN_PROGRESS)))\n{\n if ((wal_state & WALPROHIBIT_STATE_READ_ONLY) != 0)\n ereport(LOG, (errmsg(\"system is now read only\")));\n else\n {\n /* Request checkpoint */\n RequestCheckpoint(CHECKPOINT_IMMEDIATE);\n ereport(LOG, (errmsg(\"system is now read write\")));\n }\n}\n\nWe should RequestCheckpoint(CHECKPOINT_IMMEDIATE | CHECKPOINT_WAIT) before\nwe SetWALProhibitState() and do the ereport(), if we have a read-write\nstate change request.\n\nAlso, we currently request this checkpoint even if there was no startup\nrecovery and we don't set CHECKPOINT_END_OF_RECOVERY in the case where\nthe read-write request does follow a startup recovery.\nSo it should really be:\nRequestCheckpoint(CHECKPOINT_IMMEDIATE | CHECKPOINT_WAIT |\nCHECKPOINT_END_OF_RECOVERY);\nWe would need to convey that an end-of-recovery-checkpoint is pending in\nshmem somehow (and only if one such checkpoint is pending, should we do\nit as a part of the read-write request handling).\nMaybe we can set CHECKPOINT_END_OF_RECOVERY in ckpt_flags where we do:\n/*\n * Skip end-of-recovery checkpoint if the system is in WAL prohibited state.\n */\nand then check for that.\n\nSome minor comments about the code (some of them probably doesn't\nwarrant immediate attention, but for the record...):\n\n1. 
There are some places where we can use a local variable to store the\nresult of RelationNeedsWAL() to avoid repeated calls to it. E.g.\nbrin_doupdate()\n\n2. Similarly, we can also capture the calls to GetWALProhibitState() in\na local variable where applicable. E.g. inside WALProhibitRequest().\n\n3. Some of the functions that were added such as GetWALProhibitState(),\nIsWALProhibited() etc could be declared static inline.\n\n4. IsWALProhibited(): Shouldn't it really be:\nbool\nIsWALProhibited(void)\n{\n uint32 walProhibitState = GetWALProhibitState();\n return (walProhibitState & WALPROHIBIT_STATE_READ_ONLY) != 0\n && (walProhibitState & WALPROHIBIT_TRANSITION_IN_PROGRESS) == 0;\n}\n\n5. I think the comments:\n/* Must be performing an INSERT or UPDATE, so we'll have an XID */\nand\n/* Can reach here from VACUUM, so need not have an XID */\ncan be internalized in the function/macro comment header.\n\n6. Typo: ConditionVariable readonly_cv; /* signaled when ckpt_started\nadvances */\nWe need to update the comment here.\n\nRegards,\nSoumyadeep (VMware)\n\n\n",
"msg_date": "Thu, 23 Jul 2020 17:58:18 -0700",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "Hi,\n\n> From f0188a48723b1ae7372bcc6a344ed7868fdc40fb Mon Sep 17 00:00:00 2001\n> From: Amul Sul <amul.sul@enterprisedb.com>\n> Date: Fri, 27 Mar 2020 05:05:38 -0400\n> Subject: [PATCH v3 2/6] Add alter system read only/write syntax\n> \n> Note that syntax doesn't have any implementation.\n> ---\n> src/backend/nodes/copyfuncs.c | 12 ++++++++++++\n> src/backend/nodes/equalfuncs.c | 9 +++++++++\n> src/backend/parser/gram.y | 13 +++++++++++++\n> src/backend/tcop/utility.c | 20 ++++++++++++++++++++\n> src/bin/psql/tab-complete.c | 6 ++++--\n> src/include/nodes/nodes.h | 1 +\n> src/include/nodes/parsenodes.h | 10 ++++++++++\n> src/tools/pgindent/typedefs.list | 1 +\n> 8 files changed, 70 insertions(+), 2 deletions(-)\n\nShouldn't there be at outfuncs support as well? Perhaps we even need\nreadfuncs, not immediately sure.\n\n\n> From 2c5db7db70d4cebebf574fbc47db7fbf7c440be1 Mon Sep 17 00:00:00 2001\n> From: Amul Sul <amul.sul@enterprisedb.com>\n> Date: Fri, 19 Jun 2020 06:29:36 -0400\n> Subject: [PATCH v3 3/6] Implement ALTER SYSTEM READ ONLY using global barrier.\n> \n> Implementation:\n> \n> 1. When a user tried to change server state to WAL-Prohibited using\n> ALTER SYSTEM READ ONLY command; AlterSystemSetWALProhibitState() will emit\n> PROCSIGNAL_BARRIER_WAL_PROHIBIT_STATE_CHANGE barrier and will wait until the\n> barrier has been absorbed by all the backends.\n> \n> 2. When a backend receives the WAL-Prohibited barrier, at that moment if\n> it is already in a transaction and the transaction already assigned XID,\n> then the backend will be killed by throwing FATAL(XXX: need more discussion\n> on this)\n\nI think we should consider introducing XACTFATAL or such, guaranteeing\nthe transaction gets aborted, without requiring a FATAL. This has been\nneeded for enough cases that it's worthwhile.\n\n\nThere are several cases where we WAL log without having an xid\nassigned. E.g. when HOT pruning during syscache lookups or such. 
Are\nthere any cases where the check for being in recovery is followed by a\nCHECK_FOR_INTERRUPTS, before the WAL logging is done?\n\n\n> 3. Otherwise, if that backend running transaction which yet to get XID\n> assigned we don't need to do anything special, simply call\n> ResetLocalXLogInsertAllowed() so that any future WAL insert in will check\n> XLogInsertAllowed() first which set ready only state appropriately.\n> \n> 4. A new transaction (from existing or new backend) starts as a read-only\n> transaction.\n\nWhy do we need 4)? And doesn't that have the potential to be\nunnecessarily problematic if a the server is subsequently brought out of\nthe readonly state again?\n\n\n> 5. Auxiliary processes like autovacuum launcher, background writer,\n> checkpointer and walwriter will don't do anything in WAL-Prohibited\n> server state until someone wakes us up. E.g. a backend might later on\n> request us to put the system back to read-write.\n\nHm. It's not at all clear to me why bgwriter and walwriter shouldn't do\nanything in this state. bgwriter for example is even running entirely\nnormally in a hot standby node?\n\n\n> 6. At shutdown in WAL-Prohibited mode, we'll skip shutdown checkpoint\n> and xlog rotation. Starting up again will perform crash recovery(XXX:\n> need some discussion on this as well)\n> \n> 7. ALTER SYSTEM READ ONLY/WRITE is restricted on standby server.\n> \n> 8. Only super user can toggle WAL-Prohibit state.\n> \n> 9. 
Add system_is_read_only GUC show the system state -- will true when system\n> is wal prohibited or in recovery.\n\n\n\n> +/*\n> + * AlterSystemSetWALProhibitState\n> + *\n> + * Execute ALTER SYSTEM READ { ONLY | WRITE } statement.\n> + */\n> +void\n> +AlterSystemSetWALProhibitState(AlterSystemWALProhibitState *stmt)\n> +{\n> +\tif (!superuser())\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> +\t\t\t\t errmsg(\"must be superuser to execute ALTER SYSTEM command\")));\n\nISTM we should rather do this in a GRANTable manner. We've worked\nsubstantially towards that in the last few years.\n\n\n> \n> +\t/*\n> +\t * WALProhibited indicates if we have stopped allowing WAL writes.\n> +\t * Protected by info_lck.\n> +\t */\n> +\tbool\t\tWALProhibited;\n> +\n> \t/*\n> \t * SharedHotStandbyActive indicates if we allow hot standby queries to be\n> \t * run. Protected by info_lck.\n> @@ -7962,6 +7969,25 @@ StartupXLOG(void)\n> \t\tRequestCheckpoint(CHECKPOINT_FORCE);\n> }\n> \n> +void\n> +MakeReadOnlyXLOG(void)\n> +{\n> +\tSpinLockAcquire(&XLogCtl->info_lck);\n> +\tXLogCtl->WALProhibited = true;\n> +\tSpinLockRelease(&XLogCtl->info_lck);\n> +}\n> +\n> +/*\n> + * Is the system still in WAL prohibited state?\n> + */\n> +bool\n> +IsWALProhibited(void)\n> +{\n> +\tvolatile XLogCtlData *xlogctl = XLogCtl;\n> +\n> +\treturn xlogctl->WALProhibited;\n> +}\n\nWhat does this kind of locking achieving? It doesn't protect against\nconcurrent ALTER SYSTEM SET READ ONLY or such?\n\n\n> +\t\t/*\n> +\t\t * If the server is in WAL-Prohibited state then don't do anything until\n> +\t\t * someone wakes us up. E.g. 
a backend might later on request us to put\n> +\t\t * the system back to read-write.\n> +\t\t */\n> +\t\tif (IsWALProhibited())\n> +\t\t{\n> +\t\t\t(void) WaitLatch(MyLatch, WL_LATCH_SET | WL_EXIT_ON_PM_DEATH, -1,\n> +\t\t\t\t\t\t\t WAIT_EVENT_CHECKPOINTER_MAIN);\n> +\t\t\tcontinue;\n> +\t\t}\n> +\n> \t\t/*\n> \t\t * Detect a pending checkpoint request by checking whether the flags\n> \t\t * word in shared memory is nonzero. We shouldn't need to acquire the\n\nSo if the ASRO happens while a checkpoint, potentially with a\ncheckpoint_timeout = 60d, it'll not take effect until the checkpoint has\nfinished.\n\nBut uh, as far as I can tell, the code would simply continue an\nin-progress checkpoint, despite having absorbed the barrier. And then\nwe'd PANIC when doing the XLogInsert()?\n\n\n\n> diff --git a/src/include/access/walprohibit.h b/src/include/access/walprohibit.h\n> new file mode 100644\n> index 00000000000..619c33cd780\n> --- /dev/null\n> +++ b/src/include/access/walprohibit.h\n\nNot sure I like the mix of xlog/wal prefix for pretty closely related\nfiles... I'm not convinced it's worth having a separate file for this,\nfwiw.\n\n\n\n> From 5600adc647bd729e4074ecf13e97b9f297e9d5c6 Mon Sep 17 00:00:00 2001\n> From: Amul Sul <amul.sul@enterprisedb.com>\n> Date: Fri, 15 May 2020 06:39:43 -0400\n> Subject: [PATCH v3 4/6] Use checkpointer to make system READ-ONLY or\n> READ-WRITE\n> \n> Till the previous commit, the backend used to do this, but now the backend\n> requests checkpointer to do it. 
Checkpointer, noticing that the current state\n> is has WALPROHIBIT_TRANSITION_IN_PROGRESS flag set, does the barrier request,\n> and then acknowledges back to the backend who requested the state change.\n> \n> Note that this commit also enables ALTER SYSTEM READ WRITE support and make WAL\n> prohibited state persistent across the system restarts.\n\nThe split between the previous commit and this commit seems more\nconfusing than useful to me.\n\n> +/*\n> + * WALProhibitedRequest: Request checkpointer to make the WALProhibitState to\n> + * read-only.\n> + */\n> +void\n> +WALProhibitRequest(void)\n> +{\n> +\t/* Must not be called from checkpointer */\n> +\tAssert(!AmCheckpointerProcess());\n> +\tAssert(GetWALProhibitState() & WALPROHIBIT_TRANSITION_IN_PROGRESS);\n> +\n> +\t/*\n> +\t * If in a standalone backend, just do it ourselves.\n> +\t */\n> +\tif (!IsPostmasterEnvironment)\n> +\t{\n> +\t\tperformWALProhibitStateChange(GetWALProhibitState());\n> +\t\treturn;\n> +\t}\n> +\n> +\tif (CheckpointerShmem->checkpointer_pid == 0)\n> +\t\telog(ERROR, \"checkpointer is not running\");\n> +\n> +\tif (kill(CheckpointerShmem->checkpointer_pid, SIGINT) != 0)\n> +\t\telog(ERROR, \"could not signal checkpointer: %m\");\n> +\n> +\t/* Wait for the state to change to read-only */\n> +\tConditionVariablePrepareToSleep(&CheckpointerShmem->readonly_cv);\n> +\tfor (;;)\n> +\t{\n> +\t\t/* We'll be done once in-progress flag bit is cleared */\n> +\t\tif (!(GetWALProhibitState() & WALPROHIBIT_TRANSITION_IN_PROGRESS))\n> +\t\t\tbreak;\n> +\n> +\t\telog(DEBUG1, \"WALProhibitRequest: Waiting for checkpointer\");\n> +\t\tConditionVariableSleep(&CheckpointerShmem->readonly_cv,\n> +\t\t\t\t\t\t\t WAIT_EVENT_SYSTEM_WALPROHIBIT_STATE_CHANGE);\n> +\t}\n> +\tConditionVariableCancelSleep();\n> +\telog(DEBUG1, \"Done WALProhibitRequest\");\n> +}\n\nIsn't it possible that the system could have been changed back to be\nread-write by the time the wakeup is being processed?\n\n\n\n> From 
0b7426fc4708cc0e4ad333da3b35e473658bba28 Mon Sep 17 00:00:00 2001\n> From: Amul Sul <amul.sul@enterprisedb.com>\n> Date: Tue, 14 Jul 2020 02:10:55 -0400\n> Subject: [PATCH v3 5/6] Error or Assert before START_CRIT_SECTION for WAL\n> write\n\nIsn't that the wrong order? This needs to come before the feature is\nenabled, no?\n\n\n\n> @@ -758,6 +759,9 @@ brinbuildempty(Relation index)\n> \t\tReadBufferExtended(index, INIT_FORKNUM, P_NEW, RBM_NORMAL, NULL);\n> \tLockBuffer(metabuf, BUFFER_LOCK_EXCLUSIVE);\n> \n> +\t/* Building indexes will have an XID */\n> +\tAssertWALPermitted_HaveXID();\n> +\n\nUgh, that's a pretty ugly naming scheme mix.\n\n\n\n> @@ -176,6 +177,10 @@ brin_doupdate(Relation idxrel, BlockNumber pagesPerRange,\n> \tif (((BrinPageFlags(oldpage) & BRIN_EVACUATE_PAGE) == 0) &&\n> \t\tbrin_can_do_samepage_update(oldbuf, origsz, newsz))\n> \t{\n> +\t\t/* Can reach here from VACUUM, so need not have an XID */\n> +\t\tif (RelationNeedsWAL(idxrel))\n> +\t\t\tCheckWALPermitted();\n> +\n\nHm. Maybe I am confused, but why is that dependent on\nRelationNeedsWAL()? 
Shouldn't readonly actually mean readonly, even if\nno WAL is emitted?\n\n\n> #include \"access/genam.h\"\n> #include \"access/gist_private.h\"\n> #include \"access/transam.h\"\n> +#include \"access/walprohibit.h\"\n> #include \"commands/vacuum.h\"\n> #include \"lib/integerset.h\"\n> #include \"miscadmin.h\"\n\nThe number of places that now need this new header - pretty much the\nsame set of files that do XLogInsert, already requiring an xlog* header\nto be included - drives me further towards the conclusion that it's not\na good idea to have it separate.\n\n> extern void ProcessInterrupts(void);\n> \n> +#ifdef USE_ASSERT_CHECKING\n> +typedef enum\n> +{\n> +\tWALPERMIT_UNCHECKED,\n> +\tWALPERMIT_CHECKED,\n> +\tWALPERMIT_CHECKED_AND_USED\n> +} WALPermitCheckState;\n> +\n> +/* in access/walprohibit.c */\n> +extern WALPermitCheckState walpermit_checked_state;\n> +\n> +/*\n> + * Reset walpermit_checked flag when no longer in the critical section.\n> + * Otherwise, marked checked and used.\n> + */\n> +#define RESET_WALPERMIT_CHECKED_STATE() \\\n> +do { \\\n> +\twalpermit_checked_state = CritSectionCount ? \\\n> +\tWALPERMIT_CHECKED_AND_USED : WALPERMIT_UNCHECKED; \\\n> +} while(0)\n> +#else\n> +#define RESET_WALPERMIT_CHECKED_STATE() ((void) 0)\n> +#endif\n> +\n\nWhy are these in headers? And why is this tied to CritSectionCount?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 23 Jul 2020 19:04:02 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Fri, Jul 24, 2020 at 6:28 AM Soumyadeep Chakraborty <\nsoumyadeep2007@gmail.com> wrote:\n>\n> On Thu, Jun 18, 2020 at 7:54 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > I think we'd want the FIRST write operation to be the end-of-recovery\n> > checkpoint, before the system is fully read-write. And then after that\n> > completes you could do other things.\n>\n> I can't see why this is necessary from a correctness or performance\n> point of view. Maybe I'm missing something.\n>\n> In case it is necessary, the patch set does not wait for the checkpoint to\n> complete before marking the system as read-write. Refer:\n>\n> /* Set final state by clearing in-progress flag bit */\n> if (SetWALProhibitState(wal_state &\n~(WALPROHIBIT_TRANSITION_IN_PROGRESS)))\n> {\n> if ((wal_state & WALPROHIBIT_STATE_READ_ONLY) != 0)\n> ereport(LOG, (errmsg(\"system is now read only\")));\n> else\n> {\n> /* Request checkpoint */\n> RequestCheckpoint(CHECKPOINT_IMMEDIATE);\n> ereport(LOG, (errmsg(\"system is now read write\")));\n> }\n> }\n>\n> We should RequestCheckpoint(CHECKPOINT_IMMEDIATE | CHECKPOINT_WAIT) before\n> we SetWALProhibitState() and do the ereport(), if we have a read-write\n> state change request.\n>\n+1, I too have the same question.\n\nFWIW, I don't we can request CHECKPOINT_WAIT for this place, otherwise, it\nthink\nit will be deadlock case -- checkpointer process waiting for itself.\n\n> Also, we currently request this checkpoint even if there was no startup\n> recovery and we don't set CHECKPOINT_END_OF_RECOVERY in the case where\n> the read-write request does follow a startup recovery.\n> So it should really be:\n> RequestCheckpoint(CHECKPOINT_IMMEDIATE | CHECKPOINT_WAIT |\n> CHECKPOINT_END_OF_RECOVERY);\n> We would need to convey that an end-of-recovery-checkpoint is pending in\n> shmem somehow (and only if one such checkpoint is pending, should we do\n> it as a part of the read-write request handling).\n> Maybe we can set 
CHECKPOINT_END_OF_RECOVERY in ckpt_flags where we do:\n> /*\n> * Skip end-of-recovery checkpoint if the system is in WAL prohibited\nstate.\n> */\n> and then check for that.\n>\nYep, we need some indication that end-of-recovery was skipped at\nstartup,\nbut I haven't added that since I wasn't sure whether we really need\nCHECKPOINT_END_OF_RECOVERY, given the previous concern.\n\n> Some minor comments about the code (some of them probably don't\n> warrant immediate attention, but for the record...):\n>\n> 1. There are some places where we can use a local variable to store the\n> result of RelationNeedsWAL() to avoid repeated calls to it. E.g.\n> brin_doupdate()\n>\nOk.\n\n> 2. Similarly, we can also capture the calls to GetWALProhibitState() in\n> a local variable where applicable. E.g. inside WALProhibitRequest().\n>\nI don't think so.\n\n> 3. Some of the functions that were added such as GetWALProhibitState(),\n> IsWALProhibited() etc could be declared static inline.\n>\nIsWALProhibited() can be static but not GetWALProhibitState() since it\nneeds to\nbe accessible from other files.\n\n> 4. IsWALProhibited(): Shouldn't it really be:\n> bool\n> IsWALProhibited(void)\n> {\n> uint32 walProhibitState = GetWALProhibitState();\n> return (walProhibitState & WALPROHIBIT_STATE_READ_ONLY) != 0\n> && (walProhibitState & WALPROHIBIT_TRANSITION_IN_PROGRESS) == 0;\n> }\n>\nI think the current one is better: it allows read-write transactions from an\nexisting backend which has absorbed the barrier, or from a new backend, while we\nare changing state to read-write, on the assumption that we never fall back.\n\n> 5. I think the comments:\n> /* Must be performing an INSERT or UPDATE, so we'll have an XID */\n> and\n> /* Can reach here from VACUUM, so need not have an XID */\n> can be internalized in the function/macro comment header.\n>\nOk.\n\n> 6. 
Typo: ConditionVariable readonly_cv; /* signaled when ckpt_started\n> advances */\n> We need to update the comment here.\n>\nOk.\n\nWill try to address all the above review comments in the next version along\nwith\nAndres' concern/suggestion. Thanks again for your time.\n\nRegards,\nAmul",
"msg_date": "Fri, 24 Jul 2020 10:43:54 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Fri, Jul 24, 2020 at 7:34 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n\nThanks for looking at the patch.\n\n>\n> > From f0188a48723b1ae7372bcc6a344ed7868fdc40fb Mon Sep 17 00:00:00 2001\n> > From: Amul Sul <amul.sul@enterprisedb.com>\n> > Date: Fri, 27 Mar 2020 05:05:38 -0400\n> > Subject: [PATCH v3 2/6] Add alter system read only/write syntax\n> >\n> > Note that syntax doesn't have any implementation.\n> > ---\n> > src/backend/nodes/copyfuncs.c | 12 ++++++++++++\n> > src/backend/nodes/equalfuncs.c | 9 +++++++++\n> > src/backend/parser/gram.y | 13 +++++++++++++\n> > src/backend/tcop/utility.c | 20 ++++++++++++++++++++\n> > src/bin/psql/tab-complete.c | 6 ++++--\n> > src/include/nodes/nodes.h | 1 +\n> > src/include/nodes/parsenodes.h | 10 ++++++++++\n> > src/tools/pgindent/typedefs.list | 1 +\n> > 8 files changed, 70 insertions(+), 2 deletions(-)\n>\n> Shouldn't there be at outfuncs support as well? Perhaps we even need\n> readfuncs, not immediately sure.\n\nOk, can add that as well.\n\n>\n>\n>\n> > From 2c5db7db70d4cebebf574fbc47db7fbf7c440be1 Mon Sep 17 00:00:00 2001\n> > From: Amul Sul <amul.sul@enterprisedb.com>\n> > Date: Fri, 19 Jun 2020 06:29:36 -0400\n> > Subject: [PATCH v3 3/6] Implement ALTER SYSTEM READ ONLY using global barrier.\n> >\n> > Implementation:\n> >\n> > 1. When a user tried to change server state to WAL-Prohibited using\n> > ALTER SYSTEM READ ONLY command; AlterSystemSetWALProhibitState() will emit\n> > PROCSIGNAL_BARRIER_WAL_PROHIBIT_STATE_CHANGE barrier and will wait until the\n> > barrier has been absorbed by all the backends.\n> >\n> > 2. 
When a backend receives the WAL-Prohibited barrier, at that moment if\n> > it is already in a transaction and the transaction already assigned XID,\n> > then the backend will be killed by throwing FATAL(XXX: need more discussion\n> > on this)\n>\n> I think we should consider introducing XACTFATAL or such, guaranteeing\n> the transaction gets aborted, without requiring a FATAL. This has been\n> needed for enough cases that it's worthwhile.\n>\n\nAs far as I am aware, the existing code in PostgresMain() uses FATAL to terminate\nthe connection when protocol synchronization is lost. Another current\nproposal, \"Terminate the idle sessions\"[1], is using\nFATAL as well, afaik.\n\n>\n> There are several cases where we WAL log without having an xid\n> assigned. E.g. when HOT pruning during syscache lookups or such. Are\n> there any cases where the check for being in recovery is followed by a\n> CHECK_FOR_INTERRUPTS, before the WAL logging is done?\n>\n\nFor an operation without an xid, an error will be raised just before the point\nwhere the WAL record is expected. I haven't found the places you are asking\nabout at a glance; I will try to search for them, but I am sure the current\nimplementation is not missing the places where it is supposed to check the\nprohibited state and complain.\n\nQuick question: is it possible that pruning will happen with a SELECT query?\nIt would be helpful if you or someone else could point me to the place where WAL\ncan be generated even in the case of read-only queries.\n\n>\n>\n> > 3. Otherwise, if that backend running transaction which yet to get XID\n> > assigned we don't need to do anything special, simply call\n> > ResetLocalXLogInsertAllowed() so that any future WAL insert in will check\n> > XLogInsertAllowed() first which set ready only state appropriately.\n> >\n> > 4. A new transaction (from existing or new backend) starts as a read-only\n> > transaction.\n>\n> Why do we need 4)? 
And doesn't that have the potential to be\n> unnecessarily problematic if a the server is subsequently brought out of\n> the readonly state again?\n\nThe transaction that was started in the read-only system state will be read-only\nuntil the end. I think that shouldn't be too problematic.\n\n>\n>\n> > 5. Auxiliary processes like autovacuum launcher, background writer,\n> > checkpointer and walwriter will don't do anything in WAL-Prohibited\n> > server state until someone wakes us up. E.g. a backend might later on\n> > request us to put the system back to read-write.\n>\n> Hm. It's not at all clear to me why bgwriter and walwriter shouldn't do\n> anything in this state. bgwriter for example is even running entirely\n> normally in a hot standby node?\n\nI think I missed to update the description when I reverted the\nwalwriter changes. The current version doesn't have any changes to\nthe walwriter. And bgwriter too behaves the same as it on the recovery\nsystem. Will update this, sorry for the confusion.\n\n>\n>\n> > 6. At shutdown in WAL-Prohibited mode, we'll skip shutdown checkpoint\n> > and xlog rotation. Starting up again will perform crash recovery(XXX:\n> > need some discussion on this as well)\n> >\n> > 7. ALTER SYSTEM READ ONLY/WRITE is restricted on standby server.\n> >\n> > 8. Only super user can toggle WAL-Prohibit state.\n> >\n> > 9. Add system_is_read_only GUC show the system state -- will true when system\n> > is wal prohibited or in recovery.\n>\n>\n>\n> > +/*\n> > + * AlterSystemSetWALProhibitState\n> > + *\n> > + * Execute ALTER SYSTEM READ { ONLY | WRITE } statement.\n> > + */\n> > +void\n> > +AlterSystemSetWALProhibitState(AlterSystemWALProhibitState *stmt)\n> > +{\n> > + if (!superuser())\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> > + errmsg(\"must be superuser to execute ALTER SYSTEM command\")));\n>\n> ISTM we should rather do this in a GRANTable manner. 
We've worked\n> substantially towards that in the last few years.\n>\n\nI added this to be inlined with AlterSystemSetConfigFile(), if we want a\nGRANTable manner, will try that.\n\n>\n>\n> >\n> > + /*\n> > + * WALProhibited indicates if we have stopped allowing WAL writes.\n> > + * Protected by info_lck.\n> > + */\n> > + bool WALProhibited;\n> > +\n> > /*\n> > * SharedHotStandbyActive indicates if we allow hot standby queries to be\n> > * run. Protected by info_lck.\n> > @@ -7962,6 +7969,25 @@ StartupXLOG(void)\n> > RequestCheckpoint(CHECKPOINT_FORCE);\n> > }\n> >\n> > +void\n> > +MakeReadOnlyXLOG(void)\n> > +{\n> > + SpinLockAcquire(&XLogCtl->info_lck);\n> > + XLogCtl->WALProhibited = true;\n> > + SpinLockRelease(&XLogCtl->info_lck);\n> > +}\n> > +\n> > +/*\n> > + * Is the system still in WAL prohibited state?\n> > + */\n> > +bool\n> > +IsWALProhibited(void)\n> > +{\n> > + volatile XLogCtlData *xlogctl = XLogCtl;\n> > +\n> > + return xlogctl->WALProhibited;\n> > +}\n>\n> What does this kind of locking achieving? It doesn't protect against\n> concurrent ALTER SYSTEM SET READ ONLY or such?\n>\n\nThe 0004 patch improves that.\n\n>\n>\n> > + /*\n> > + * If the server is in WAL-Prohibited state then don't do anything until\n> > + * someone wakes us up. E.g. a backend might later on request us to put\n> > + * the system back to read-write.\n> > + */\n> > + if (IsWALProhibited())\n> > + {\n> > + (void) WaitLatch(MyLatch, WL_LATCH_SET | WL_EXIT_ON_PM_DEATH, -1,\n> > + WAIT_EVENT_CHECKPOINTER_MAIN);\n> > + continue;\n> > + }\n> > +\n> > /*\n> > * Detect a pending checkpoint request by checking whether the flags\n> > * word in shared memory is nonzero. We shouldn't need to acquire the\n>\n> So if the ASRO happens while a checkpoint, potentially with a\n> checkpoint_timeout = 60d, it'll not take effect until the checkpoint has\n> finished.\n>\n> But uh, as far as I can tell, the code would simply continue an\n> in-progress checkpoint, despite having absorbed the barrier. 
And then\n> we'd PANIC when doing the XLogInsert()?\n\nI think this might not be the case with the next checkpointer changes in the\n0004 patch.\n\n>\n> > diff --git a/src/include/access/walprohibit.h b/src/include/access/walprohibit.h\n> > new file mode 100644\n> > index 00000000000..619c33cd780\n> > --- /dev/null\n> > +++ b/src/include/access/walprohibit.h\n>\n> Not sure I like the mix of xlog/wal prefix for pretty closely related\n> files... I'm not convinced it's worth having a separate file for this,\n> fwiw.\n\nI see.\n\n>\n>\n>\n> > From 5600adc647bd729e4074ecf13e97b9f297e9d5c6 Mon Sep 17 00:00:00 2001\n> > From: Amul Sul <amul.sul@enterprisedb.com>\n> > Date: Fri, 15 May 2020 06:39:43 -0400\n> > Subject: [PATCH v3 4/6] Use checkpointer to make system READ-ONLY or\n> > READ-WRITE\n> >\n> > Till the previous commit, the backend used to do this, but now the backend\n> > requests checkpointer to do it. Checkpointer, noticing that the current state\n> > is has WALPROHIBIT_TRANSITION_IN_PROGRESS flag set, does the barrier request,\n> > and then acknowledges back to the backend who requested the state change.\n> >\n> > Note that this commit also enables ALTER SYSTEM READ WRITE support and make WAL\n> > prohibited state persistent across the system restarts.\n>\n> The split between the previous commit and this commit seems more\n> confusing than useful to me.\n\nBy looking at the previous two review comments I agree with you. My\nintention to make things easier for the reviewer. 
Will merge this patch\nwith the previous one.\n\n>\n> > +/*\n> > + * WALProhibitedRequest: Request checkpointer to make the WALProhibitState to\n> > + * read-only.\n> > + */\n> > +void\n> > +WALProhibitRequest(void)\n> > +{\n> > + /* Must not be called from checkpointer */\n> > + Assert(!AmCheckpointerProcess());\n> > + Assert(GetWALProhibitState() & WALPROHIBIT_TRANSITION_IN_PROGRESS);\n> > +\n> > + /*\n> > + * If in a standalone backend, just do it ourselves.\n> > + */\n> > + if (!IsPostmasterEnvironment)\n> > + {\n> > + performWALProhibitStateChange(GetWALProhibitState());\n> > + return;\n> > + }\n> > +\n> > + if (CheckpointerShmem->checkpointer_pid == 0)\n> > + elog(ERROR, \"checkpointer is not running\");\n> > +\n> > + if (kill(CheckpointerShmem->checkpointer_pid, SIGINT) != 0)\n> > + elog(ERROR, \"could not signal checkpointer: %m\");\n> > +\n> > + /* Wait for the state to change to read-only */\n> > + ConditionVariablePrepareToSleep(&CheckpointerShmem->readonly_cv);\n> > + for (;;)\n> > + {\n> > + /* We'll be done once in-progress flag bit is cleared */\n> > + if (!(GetWALProhibitState() & WALPROHIBIT_TRANSITION_IN_PROGRESS))\n> > + break;\n> > +\n> > + elog(DEBUG1, \"WALProhibitRequest: Waiting for checkpointer\");\n> > + ConditionVariableSleep(&CheckpointerShmem->readonly_cv,\n> > + WAIT_EVENT_SYSTEM_WALPROHIBIT_STATE_CHANGE);\n> > + }\n> > + ConditionVariableCancelSleep();\n> > + elog(DEBUG1, \"Done WALProhibitRequest\");\n> > +}\n>\n> Isn't it possible that the system could have been changed back to be\n> read-write by the time the wakeup is being processed?\n\nYou have a point, the second backend will see the ASRW executed successfully\ndespite any changes by this. I think it better to have an error for the second\nbackend instead of silent. 
Will do the same.\n\n>\n> > From 0b7426fc4708cc0e4ad333da3b35e473658bba28 Mon Sep 17 00:00:00 2001\n> > From: Amul Sul <amul.sul@enterprisedb.com>\n> > Date: Tue, 14 Jul 2020 02:10:55 -0400\n> > Subject: [PATCH v3 5/6] Error or Assert before START_CRIT_SECTION for WAL\n> > write\n>\n> Isn't that the wrong order? This needs to come before the feature is\n> enabled, no?\n>\n\nAgreed, but IMHO let it be; my intention behind the split is to make the code\neasy to read, and I don't think they are going to be checked in separately except 0001.\n\n>\n>\n> > @@ -758,6 +759,9 @@ brinbuildempty(Relation index)\n> > ReadBufferExtended(index, INIT_FORKNUM, P_NEW, RBM_NORMAL, NULL);\n> > LockBuffer(metabuf, BUFFER_LOCK_EXCLUSIVE);\n> >\n> > + /* Building indexes will have an XID */\n> > + AssertWALPermitted_HaveXID();\n> > +\n>\n> Ugh, that's a pretty ugly naming scheme mix.\n>\n\nOk.\n\n>\n>\n>\n> > @@ -176,6 +177,10 @@ brin_doupdate(Relation idxrel, BlockNumber pagesPerRange,\n> > if (((BrinPageFlags(oldpage) & BRIN_EVACUATE_PAGE) == 0) &&\n> > brin_can_do_samepage_update(oldbuf, origsz, newsz))\n> > {\n> > + /* Can reach here from VACUUM, so need not have an XID */\n> > + if (RelationNeedsWAL(idxrel))\n> > + CheckWALPermitted();\n> > +\n>\n> Hm. Maybe I am confused, but why is that dependent on\n> RelationNeedsWAL()? 
Shouldn't readonly actually mean readonly, even if\n> no WAL is emitted?\n>\n\nTo avoid the unnecessary error for the case where the wal record will not be\ngenerated.\n\n>\n> > #include \"access/genam.h\"\n> > #include \"access/gist_private.h\"\n> > #include \"access/transam.h\"\n> > +#include \"access/walprohibit.h\"\n> > #include \"commands/vacuum.h\"\n> > #include \"lib/integerset.h\"\n> > #include \"miscadmin.h\"\n>\n> The number of places that now need this new header - pretty much the\n> same set of files that do XLogInsert, already requiring an xlog* header\n> to be included - drives me further towards the conclusion that it's not\n> a good idea to have it separate.\n>\n\nNoted.\n\n>\n> > extern void ProcessInterrupts(void);\n> >\n> > +#ifdef USE_ASSERT_CHECKING\n> > +typedef enum\n> > +{\n> > + WALPERMIT_UNCHECKED,\n> > + WALPERMIT_CHECKED,\n> > + WALPERMIT_CHECKED_AND_USED\n> > +} WALPermitCheckState;\n> > +\n> > +/* in access/walprohibit.c */\n> > +extern WALPermitCheckState walpermit_checked_state;\n> > +\n> > +/*\n> > + * Reset walpermit_checked flag when no longer in the critical section.\n> > + * Otherwise, marked checked and used.\n> > + */\n> > +#define RESET_WALPERMIT_CHECKED_STATE() \\\n> > +do { \\\n> > + walpermit_checked_state = CritSectionCount ? \\\n> > + WALPERMIT_CHECKED_AND_USED : WALPERMIT_UNCHECKED; \\\n> > +} while(0)\n> > +#else\n> > +#define RESET_WALPERMIT_CHECKED_STATE() ((void) 0)\n> > +#endif\n> > +\n>\n> Why are these in headers? And why is this tied to CritSectionCount?\n>\n\nIf it is too bad we could think to move that. In the critical section, we don't\nwant the walpermit_checked_state flag to be reset by XLogResetInsertion()\notherwise following XLogBeginInsert() will have an assertion. The idea is that\nanything that checks the flag changes it from UNCHECKED to CHECKED.\nXLogResetInsertion() sets it to CHECKED_AND_USED if in a critical section and to\nUNCHECKED otherwise (i.e. 
when CritSectionCount == 0).\n\nRegards,\nAmul\n\n1] https://postgr.es/m/763A0689-F189-459E-946F-F0EC4458980B@hotmail.com\n\n\n",
"msg_date": "Fri, 24 Jul 2020 18:39:14 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, Jul 22, 2020 at 6:03 PM Soumyadeep Chakraborty\n<soumyadeep2007@gmail.com> wrote:\n> So if we are not going to address those cases, we should change the\n> syntax and remove the notion of read-only. It could be:\n>\n> ALTER SYSTEM SET wal_writes TO off|on;\n> or\n> ALTER SYSTEM SET prohibit_wal TO off|on;\n\nThis doesn't really work because of the considerations mentioned in\nhttp://postgr.es/m/CA+TgmoakCtzOZr0XEqaLFiMBcjE2rGcBAzf4EybpXjtNetpSVw@mail.gmail.com\n\n> If we are going to try to make it truly read-only, and cater to the\n> other use cases, we have to:\n>\n> Perform a checkpoint before declaring the system read-only (i.e. before\n> the command returns). This may be expensive of course, as Andres has\n> pointed out in this thread, but it is a price that has to be paid. If we\n> do this checkpoint, then we can avoid an additional shutdown checkpoint\n> and an end-of-recovery checkpoint (if we restart the primary after a\n> crash while in read-only mode). Also, we would have to prevent any\n> operation that touches control files, which I am not sure we do today in\n> the current patch.\n\nIt's basically impossible to create a system for fast failover that\ninvolves a checkpoint. See my comments at\nhttp://postgr.es/m/CA+TgmoYe8uCgtYFGfnv3vWpZTygsdkSu2F4MNiqhkar_UKbWfQ@mail.gmail.com\n- you can't achieve five nines or even four nines of availability if\nyou have to wait for a checkpoint that might take twenty minutes. I\nhave nothing against a feature that does what you're describing, but\nthis feature is designed to make fast failover easier to accomplish,\nand it's not going to succeed if it involves a checkpoint.\n\n> Why not have the best of both worlds? 
Consider:\n>\n> ALTER SYSTEM SET read_only to {off, on, wal};\n>\n> -- on: wal writes off + no writes to disk\n> -- off: default\n> -- wal: only wal writes off\n>\n> Of course, there can probably be better syntax for the above.\n\nThere are a few things you can can imagine doing here:\n\n1. Freeze WAL writes but allow dirty buffers to be flushed afterward.\nThis is the most useful thing for fast failover, I would argue,\nbecause it's quick and the fact that some dirty buffers may not be\nwritten doesn't matter.\n\n2. Freeze WAL writes except a final checkpoint which will flush dirty\nbuffers along the way. This is like shutting the system down cleanly\nand bringing it back up as a standby, except without performing a\nshutdown.\n\n3. Freeze WAL writes and write out all dirty buffers without actually\ncheckpointing. This is sort of a hybrid of #1 and #2. It's probably\nnot much faster than #2 but it avoids generating any more WAL.\n\n4. Freeze WAL writes and just keep all the dirty buffers cached,\nwithout writing them out. This seems like a bad idea for the reasons\nmentioned in Amul's reply. The system might not be able to respond\neven to read-only queries any more if shared_buffers is full of\nunevictable dirty buffers.\n\nEither #2 or #3 is sufficient to take a filesystem level snapshot of\nthe cluster while it's running, but I'm not sure why that's\ninteresting. You can already do that sort of thing by using\npg_basebackup or by running pg_start_backup() and pg_stop_backup() and\ncopying the directory in the middle, and you can do all of that while\nthe cluster is accepting writes, which seems like it will usually be\nmore convenient. 
If you do want this, you have several options, like\nrunning a checkpoint immediately followed by ALTER SYSTEM READ ONLY\n(so that the amount of WAL generated during the backup is small but\nmaybe not none); or shutting down the system cleanly and restarting it\nas a standby; or maybe using the proposed pg_ctl demote feature\nmentioned on a separate thread.\n\nContrary to what you write, I don't think either #2 or #3 is\nsufficient to enable checksums, at least not without some more\nengineering, because the server would cache the state from the control\nfile, and a bunch of blocks from the database. I guess it would work\nif you did a server restart afterward, but I think there are better\nways of supporting online checksum enabling that don't require\nshutting down the server, or even making it read-only; and there's\nbeen significant work done on those already.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 24 Jul 2020 10:31:49 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Jul 23, 2020 at 12:11 PM Soumyadeep Chakraborty\n<soumyadeep2007@gmail.com> wrote:\n> In the read-only level I was suggesting, I wasn't suggesting that we\n> stop WAL flushes, in fact we should flush the WAL before we mark the\n> system as read-only. Once the system declares itself as read-only, it\n> will not perform any more on-disk changes; It may perform all the\n> flushes it needs as a part of the read-only request handling.\n\nI think that's already how the patch works, or at least how it should\nwork. You stop new writes, flush any existing WAL, and then declare\nthe system read-only. That can all be done quickly.\n\n> What I am saying is it doesn't have to be just the queries. I think we\n> can cater to all the other use cases simply by forcing a checkpoint\n> before marking the system as read-only.\n\nBut that part can't, which means that if we did that, it would break\nthe feature for the originally intended use case. I'm not on board\nwith that.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 24 Jul 2020 10:33:55 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Jul 23, 2020 at 10:04 PM Andres Freund <andres@anarazel.de> wrote:\n> I think we should consider introducing XACTFATAL or such, guaranteeing\n> the transaction gets aborted, without requiring a FATAL. This has been\n> needed for enough cases that it's worthwhile.\n\nSeems like that would need a separate discussion, apart from this thread.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 24 Jul 2020 10:35:32 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Jul 23, 2020 at 10:14 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Fri, Jul 24, 2020 at 6:28 AM Soumyadeep Chakraborty <soumyadeep2007@gmail.com> wrote:\n> > In case it is necessary, the patch set does not wait for the checkpoint to\n> > complete before marking the system as read-write. Refer:\n> >\n> > /* Set final state by clearing in-progress flag bit */\n> > if (SetWALProhibitState(wal_state &\n> ~(WALPROHIBIT_TRANSITION_IN_PROGRESS)))\n> > {\n> > if ((wal_state & WALPROHIBIT_STATE_READ_ONLY) != 0)\n> > ereport(LOG, (errmsg(\"system is now read only\")));\n> > else\n> > {\n> > /* Request checkpoint */\n> > RequestCheckpoint(CHECKPOINT_IMMEDIATE);\n> > ereport(LOG, (errmsg(\"system is now read write\")));\n> > }\n> > }\n> >\n> > We should RequestCheckpoint(CHECKPOINT_IMMEDIATE | CHECKPOINT_WAIT) before\n> > we SetWALProhibitState() and do the ereport(), if we have a read-write\n> > state change request.\n> >\n> +1, I too have the same question.\n>\n>\n>\n> FWIW, I don't we can request CHECKPOINT_WAIT for this place, otherwise, it\n> think\n> it will be deadlock case -- checkpointer process waiting for itself.\n\nWe should really just call CreateCheckPoint() here instead of\nRequestCheckpoint().\n\n> > 3. Some of the functions that were added such as GetWALProhibitState(),\n> > IsWALProhibited() etc could be declared static inline.\n> >\n> IsWALProhibited() can be static but not GetWALProhibitState() since it\n> needed to\n> be accessible from other files.\n\nIf you place a static inline function in a header file, it will be\naccessible from other files. E.g. pg_atomic_* functions.\n\nRegards,\nSoumyadeep\n\n\n",
"msg_date": "Fri, 24 Jul 2020 10:10:19 -0700",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Fri, Jul 24, 2020 at 7:32 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Jul 22, 2020 at 6:03 PM Soumyadeep Chakraborty\n> <soumyadeep2007@gmail.com> wrote:\n> > So if we are not going to address those cases, we should change the\n> > syntax and remove the notion of read-only. It could be:\n> >\n> > ALTER SYSTEM SET wal_writes TO off|on;\n> > or\n> > ALTER SYSTEM SET prohibit_wal TO off|on;\n>\n> This doesn't really work because of the considerations mentioned in\n> http://postgr.es/m/CA+TgmoakCtzOZr0XEqaLFiMBcjE2rGcBAzf4EybpXjtNetpSVw@mail.gmail.com\n\nAh yes. We should then have ALTER SYSTEM WAL {PERMIT|PROHIBIT}. I don't\nthink we should say \"READ ONLY\" if we still allow on-disk file changes\nafter the ALTER SYSTEM command returns (courtesy dirty buffer flushes)\nbecause it does introduce confusion, especially to an audience not privy\nto this thread. When people hear \"read-only\" they may think of static on-disk\nfiles immediately.\n\n> Contrary to what you write, I don't think either #2 or #3 is\n> sufficient to enable checksums, at least not without some more\n> engineering, because the server would cache the state from the control\n> file, and a bunch of blocks from the database. I guess it would work\n> if you did a server restart afterward, but I think there are better\n> ways of supporting online checksum enabling that don't require\n> shutting down the server, or even making it read-only; and there's\n> been significant work done on those already.\n\nAgreed. As you mentioned, if we did do #2 or #3, we would be able to do\npg_checksums on a server that was shut down or that had crashed while it\nwas in a read-only state, which is what Michael was asking for in [1]. 
I\nthink it's just cleaner if we allow for this.\n\nI don't have enough context to enumerate use cases for the advantages or\nopportunities that would come with an assurance that the cluster's files\nare frozen (and not covered by any existing utilities), but surely there\nare some? Like the possibility of pg_upgrade on a running server while\nit can entertain read-only queries? Surely, that's a nice one!\n\nOf course, some or all of these utilities would need to be taught about\nread-only mode.\n\nRegards,\nSoumyadeep\n\n[1] http://postgr.es/m/20200626095921.GF1504@paquier.xyz\n\n\n",
"msg_date": "Fri, 24 Jul 2020 12:11:43 -0700",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Fri, Jul 24, 2020 at 7:34 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n>\n> On Thu, Jul 23, 2020 at 12:11 PM Soumyadeep Chakraborty\n> <soumyadeep2007@gmail.com> wrote:\n> > In the read-only level I was suggesting, I wasn't suggesting that we\n> > stop WAL flushes, in fact we should flush the WAL before we mark the\n> > system as read-only. Once the system declares itself as read-only, it\n> > will not perform any more on-disk changes; It may perform all the\n> > flushes it needs as a part of the read-only request handling.\n>\n> I think that's already how the patch works, or at least how it should\n> work. You stop new writes, flush any existing WAL, and then declare\n> the system read-only. That can all be done quickly.\n>\n\nTrue, except for the fact that it allows dirty buffers to be flushed\nafter the ALTER command returns.\n\n> > What I am saying is it doesn't have to be just the queries. I think we\n> > can cater to all the other use cases simply by forcing a checkpoint\n> > before marking the system as read-only.\n>\n> But that part can't, which means that if we did that, it would break\n> the feature for the originally intended use case. I'm not on board\n> with that.\n>\n\nReferring to the options you presented in [1]:\nI am saying that we should allow for both: with a checkpoint (#2) (can\nalso be #3) and without a checkpoint (#1) before having the ALTER\ncommand return, by having different levels of read-onlyness.\n\nWe should have syntax variants for these. The syntax should not be an\nALTER SYSTEM SET as you have pointed out before. Perhaps:\n\nALTER SYSTEM READ ONLY; -- #2 or #3\nALTER SYSTEM READ ONLY WAL; -- #1\nALTER SYSTEM READ WRITE;\n\nor even:\n\nALTER SYSTEM FREEZE; -- #2 or #3\nALTER SYSTEM FREEZE WAL; -- #1\nALTER SYSTEM UNFREEZE;\n\nRegards,\nSoumyadeep (VMware)\n\n[1] http://postgr.es/m/CA+TgmoZ-c3Dz9QwHwmm4bc36N4u0XZ2OyENewMf+BwokbYdK9Q@mail.gmail.com\n\n\n",
"msg_date": "Fri, 24 Jul 2020 12:11:47 -0700",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "Hi,\n\nThe attached version is updated w.r.t. some of the review comments\nfrom Soumyadeep and Andres.\n\nTwo things from Andres' review comments are not addressed:\n\n1. Only a superuser is allowed to execute AlterSystemSetWALProhibitState().\nPer Andres, we should instead do this in a GRANTable manner. I tried that but\ngot a little confused about the roles that we could use for ASRO and didn't\nsee a clearly appropriate one. pg_signal_backend could have suited ASRO,\nwhere we terminate some of the backends, but a user granted this role is not\nsupposed to terminate a superuser backend. If we used it, we would need to\ncheck for a superuser backend and raise an error or warning. Other roles are\npg_write_server_files or pg_execute_server_program, but I am not sure we\nshould use either of these; it seems a bit confusing to me. Any suggestions,\nor am I missing something here?\n\n2. About the walprohibit.c/.h files, Andres' concern with the file name is\nthat WAL-related file names start with xlog. I think renaming to xlog* would\nnot be correct and would be more confusing, since the functions, variables,\nand macros inside the walprohibit.c/.h files contain the walprohibit keyword.\nAnother concern is that, with a separate file, we have to include it in many\nplaces, but I think that is a one-time pain and worth it to keep the code\nmodularised.\n\nAndres, Robert, do let me know your opinion on this; if you think we should\nmerge the walprohibit.c/.h files into xlog.c/.h, I will do that in the next\nversion.\n\n\nChanges in the attached version are:\n\n1. Renamed readonly_cv to walprohibit_cv.\n2. Removed repetitive comments for CheckWALPermitted() &\nAssertWALPermitted_HaveXID().\n3. Renamed AssertWALPermitted_HaveXID() to AssertWALPermittedHaveXID().\n4. Changes to avoid repeated RelationNeedsWAL() calls.\n5. Made IsWALProhibited() a static inline function.\n6. Added outfuncs and readfuncs functions.\n7. Added an error when a read-only state transition is in progress and other\nbackends try to make the system read-write or vice versa. Previously, the\nsecond backend saw the command as executed successfully when it wasn't.\n8. Merged the checkpointer code changes patch into 0002.\n\nRegards,\nAmul",
"msg_date": "Wed, 29 Jul 2020 16:05:00 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Fri, Jul 24, 2020 at 10:40 PM Soumyadeep Chakraborty <\nsoumyadeep2007@gmail.com> wrote:\n\n> On Thu, Jul 23, 2020 at 10:14 PM Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > On Fri, Jul 24, 2020 at 6:28 AM Soumyadeep Chakraborty <\n> soumyadeep2007@gmail.com> wrote:\n> > > In case it is necessary, the patch set does not wait for the\n> checkpoint to\n> > > complete before marking the system as read-write. Refer:\n> > >\n> > > /* Set final state by clearing in-progress flag bit */\n> > > if (SetWALProhibitState(wal_state &\n> > ~(WALPROHIBIT_TRANSITION_IN_PROGRESS)))\n> > > {\n> > > if ((wal_state & WALPROHIBIT_STATE_READ_ONLY) != 0)\n> > > ereport(LOG, (errmsg(\"system is now read only\")));\n> > > else\n> > > {\n> > > /* Request checkpoint */\n> > > RequestCheckpoint(CHECKPOINT_IMMEDIATE);\n> > > ereport(LOG, (errmsg(\"system is now read write\")));\n> > > }\n> > > }\n> > >\n> > > We should RequestCheckpoint(CHECKPOINT_IMMEDIATE | CHECKPOINT_WAIT)\n> before\n> > > we SetWALProhibitState() and do the ereport(), if we have a read-write\n> > > state change request.\n> > >\n> > +1, I too have the same question.\n> >\n> >\n> >\n> > FWIW, I don't we can request CHECKPOINT_WAIT for this place, otherwise,\n> it\n> > think\n> > it will be deadlock case -- checkpointer process waiting for itself.\n>\n> We should really just call CreateCheckPoint() here instead of\n> RequestCheckpoint().\n>\n>\nOnly setting the flag would have been enough for now; the next loop of\nCheckpointerMain() is going to call CreateCheckPoint() anyway, without\nwaiting. I used RequestCheckpoint() to avoid duplicating the flag-setting\ncode. Also, I think RequestCheckpoint() will be better so that we don't need\nto deal with the standalone backend; the only imperfection is that it will\nunnecessarily signal itself, but that should be fine, I guess.\n\n> > 3. 
Some of the functions that were added such as GetWALProhibitState(),\n> > IsWALProhibited() etc could be declared static inline.\n> > >\n> > IsWALProhibited() can be static but not GetWALProhibitState() since it\n> > needed to\n> > be accessible from other files.\n>\n> If you place a static inline function in a header file, it will be\n> accessible from other files. E.g. pg_atomic_* functions.\n>\n\nWell, the current patch set also has few inline functions in the header\nfile.\nBut, I don't think we can do the same for GetWALProhibitState() without\nchanging\nthe XLogCtl structure scope which is local to xlog.c file and the changing\nXLogCtl\nscope would be a bad idea.\n\nRegards,\nAmul",
"msg_date": "Wed, 29 Jul 2020 16:35:08 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Fri, Jul 24, 2020 at 3:12 PM Soumyadeep Chakraborty\n<soumyadeep2007@gmail.com> wrote:\n> Ah yes. We should then have ALTER SYSTEM WAL {PERMIT|PROHIBIT}. I don't\n> think we should say \"READ ONLY\" if we still allow on-disk file changes\n> after the ALTER SYSTEM command returns (courtesy dirty buffer flushes)\n> because it does introduce confusion, especially to an audience not privy\n> to this thread. When people hear \"read-only\" they may think of static on-disk\n> files immediately.\n\nThey might think of a variety of things that are not a correct\ninterpretation of what the feature does, but I think the way to handle\nthat is to document it properly. I don't think making WAL a grammar\nkeyword just for this is a good idea. I'm not totally stuck on this\nparticular syntax if there's consensus on something else, but I\nseriously doubt that there will be consensus around adding parser\nkeywords for this.\n\n> I don't have enough context to enumerate use cases for the advantages or\n> opportunities that would come with an assurance that the cluster's files\n> are frozen (and not covered by any existing utilities), but surely there\n> are some? Like the possibility of pg_upgrade on a running server while\n> it can entertain read-only queries? Surely, that's a nice one!\n\nI think that this feature is plenty complicated enough already, and we\nshouldn't make it more complicated to cater to additional use cases,\nespecially when those use cases are somewhat uncertain and would\nprobably require additional work in other parts of the system.\n\nFor instance, I think it would be great to have an option to start the\npostmaster in a strictly \"don't write ANYTHING\" mode where regardless\nof the cluster state it won't write any data files or any WAL or even\nthe control file. It would be useful for poking around on damaged\nclusters without making things worse. And it's somewhat related to the\ntopic of this thread, but it's not THAT closely related. 
It's better\nto add features one at a time; you can always add more later, but if\nyou make the individual ones too big and hard they don't get done.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 29 Jul 2020 12:09:49 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "Attached is a rebased on top of the latest master head (# 3e98c0bafb2).\n\nRegards,\nAmul",
"msg_date": "Wed, 19 Aug 2020 15:58:15 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, Aug 19, 2020 at 6:28 AM Amul Sul <sulamul@gmail.com> wrote:\n> Attached is a rebased on top of the latest master head (# 3e98c0bafb2).\n\nDoes anyone, especially anyone named Andres Freund, have comments on\n0001? That work is somewhat independent of the rest of this patch set\nfrom a theoretical point of view, and it seems like if nobody sees a\nproblem with the line of attack there, it would make sense to go ahead\nand commit that part. Considering that this global barrier stuff is\nnew and that I'm not sure how well we really understand the problems\nyet, there's a possibility that we might end up revising these details\nagain. I understand that most people, including me, are somewhat\nreluctant to see experimental code get committed, in this case that\nship has basically sailed already, since neither of the patches that\nwe thought would use the barrier mechanism end up making it into v13.\nI don't think it's really making things any worse to try to improve\nthe mechanism.\n\n0002 isn't separately committable, but I don't see anything wrong with it.\n\nRegarding 0003:\n\nI don't understand why ProcessBarrierWALProhibit() can safely assert\nthat the WALPROHIBIT_STATE_READ_ONLY is set.\n\n+ errhint(\"Cannot continue a\ntransaction if it has performed writes while system is read only.\")));\n\nThis sentence is bad because it makes it sound like the current\ntransaction successfully performed a write after the system had\nalready become read-only. I think something like errdetail(\"Sessions\nwith open write transactions must be terminated.\") would be better.\n\nI think SetWALProhibitState() could be in walprohibit.c rather than\nxlog.c. Also, this function appears to have obvious race conditions.\nIt fetches the current state, then thinks things over while holding no\nlock, and then unconditionally updates the current state. What happens\nif somebody else has changed the state in the meantime? 
I had sort of\nimagined that we'd use something like pg_atomic_uint32 for this and\nmanipulate it using compare-and-swap operations. Using some kind of\nlock is probably fine, too, but you have to hold it long enough that\nthe variable can't change under you while you're still deciding\nwhether it's OK to modify it, or else recheck after reacquiring the\nlock that the value doesn't differ from what you expect.\n\nI think the choice to use info_lck to synchronize\nSharedWALProhibitState is very strange -- what is the justification\nfor that? I thought the idea might be that we frequently need to check\nSharedWALProhibitState at times when we'd be holding info_lck anyway,\nbut it looks to me like you always do separate acquisitions of\ninfo_lck just for this, in which case I don't see why we should use it\nhere instead of a separate lock. For that matter, why does this need\nto be part of XLogCtlData rather than a separate shared memory area\nthat is private to walprohibit.c?\n\n- else\n+ /*\n+ * Can't perform checkpoint or xlog rotation without writing WAL.\n+ */\n+ else if (XLogInsertAllowed())\n\nNot project style.\n\n+ case WAIT_EVENT_SYSTEM_WALPROHIBIT_STATE_CHANGE:\n\nCan we drop the word SYSTEM here to make this shorter, or would that\nbreak some convention?\n\n+/*\n+ * NB: The return string should be the same as the _ShowOption() for boolean\n+ * type.\n+ */\n+ static const char *\n+ show_system_is_read_only(void)\n+{\n\nI'm not sure the comment is appropriate here, but I'm very sure the\nextra spaces before \"static\" and \"show\" are not per style.\n\n+ /* We'll be done once in-progress flag bit is cleared */\n\nAnother whitespace mistake.\n\n+ elog(DEBUG1, \"WALProhibitRequest: Waiting for checkpointer\");\n+ elog(DEBUG1, \"Done WALProhibitRequest\");\n\nI think these should be removed.\n\nCan WALProhibitRequest() and performWALProhibitStateChange() be moved\nto walprohibit.c, just to bring more of the code for this feature\ntogether in one place? 
Maybe we could also rename them to\nRequestWALProhibitChange() and CompleteWALProhibitChange()?\n\n- * think it should leave the child state in place.\n+ * think it should leave the child state in place. Note that the upper\n+ * transaction will be a force to ready-only irrespective of\nits previous\n+ * status if the server state is WAL prohibited.\n */\n- XactReadOnly = s->prevXactReadOnly;\n+ XactReadOnly = s->prevXactReadOnly || !XLogInsertAllowed();\n\nBoth instances of this pattern seem sketchy to me. You don't expect\nthat reverting the state to a previous state will instead change to a\ndifferent state that doesn't match up with what you had before. What\nis the bad thing that would happen if we did not make this change?\n\n- * Else, must check to see if we're still in recovery.\n+ * Else, must check to see if we're still in recovery\n\nSpurious change.\n\n+ /* Request checkpoint */\n+ RequestCheckpoint(CHECKPOINT_IMMEDIATE);\n+ ereport(LOG, (errmsg(\"system is now read write\")));\n\nThis does not seem right. Perhaps the intention here was that the\nsystem should perform a checkpoint when it switches to read-write\nstate after having skipped the startup checkpoint. But why would we do\nthis unconditionally in all cases where we just went to a read-write\nstate?\n\nThere's probably quite a bit more to say about 0003 but I think I'm\nrunning too low on mental energy to say more now.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 28 Aug 2020 15:53:29 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Sat, Aug 29, 2020 at 1:23 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Aug 19, 2020 at 6:28 AM Amul Sul <sulamul@gmail.com> wrote:\n> > Attached is a rebased on top of the latest master head (# 3e98c0bafb2).\n>\n> Does anyone, especially anyone named Andres Freund, have comments on\n> 0001? That work is somewhat independent of the rest of this patch set\n> from a theoretical point of view, and it seems like if nobody sees a\n> problem with the line of attack there, it would make sense to go ahead\n> and commit that part. Considering that this global barrier stuff is\n> new and that I'm not sure how well we really understand the problems\n> yet, there's a possibility that we might end up revising these details\n> again. I understand that most people, including me, are somewhat\n> reluctant to see experimental code get committed, in this case that\n> ship has basically sailed already, since neither of the patches that\n> we thought would use the barrier mechanism end up making it into v13.\n> I don't think it's really making things any worse to try to improve\n> the mechanism.\n>\n> 0002 isn't separately committable, but I don't see anything wrong with it.\n>\n> Regarding 0003:\n>\n> I don't understand why ProcessBarrierWALProhibit() can safely assert\n> that the WALPROHIBIT_STATE_READ_ONLY is set.\n>\n\nIF blocks entered to kill a transaction have valid XID & this happens only in\ncase of system state changing to READ_ONLY.\n\n> + errhint(\"Cannot continue a\n> transaction if it has performed writes while system is read only.\")));\n>\n> This sentence is bad because it makes it sound like the current\n> transaction successfully performed a write after the system had\n> already become read-only. 
I think something like errdetail(\"Sessions\n> with open write transactions must be terminated.\") would be better.\n>\n\nOk, changed as suggested in the attached version.\n\n> I think SetWALProhibitState() could be in walprohibit.c rather than\n> xlog.c. Also, this function appears to have obvious race conditions.\n> It fetches the current state, then thinks things over while holding no\n> lock, and then unconditionally updates the current state. What happens\n> if somebody else has changed the state in the meantime? I had sort of\n> imagined that we'd use something like pg_atomic_uint32 for this and\n> manipulate it using compare-and-swap operations. Using some kind of\n> lock is probably fine, too, but you have to hold it long enough that\n> the variable can't change under you while you're still deciding\n> whether it's OK to modify it, or else recheck after reacquiring the\n> lock that the value doesn't differ from what you expect.\n>\n> I think the choice to use info_lck to synchronize\n> SharedWALProhibitState is very strange -- what is the justification\n> for that? I thought the idea might be that we frequently need to check\n> SharedWALProhibitState at times when we'd be holding info_lck anyway,\n> but it looks to me like you always do separate acquisitions of\n> info_lck just for this, in which case I don't see why we should use it\n> here instead of a separate lock. For that matter, why does this need\n> to be part of XLogCtlData rather than a separate shared memory area\n> that is private to walprohibit.c?\n>\n\nIn the attached patch I added a separate shared memory structure for WAL\nprohibit state. SharedWALProhibitState is now pg_atomic_uint32 and part of that\nstructure instead of XLogCtlData. 
The shared state will be changed using a\ncompare-and-swap operation.\n\nI hope that should be enough to avoid said race conditions.\n\n> - else\n> + /*\n> + * Can't perform checkpoint or xlog rotation without writing WAL.\n> + */\n> + else if (XLogInsertAllowed())\n>\n> Not project style.\n>\n\nCorrected.\n\n> + case WAIT_EVENT_SYSTEM_WALPROHIBIT_STATE_CHANGE:\n>\n> Can we drop the word SYSTEM here to make this shorter, or would that\n> break some convention?\n>\n\nNo issue, removed SYSTEM.\n\n> +/*\n> + * NB: The return string should be the same as the _ShowOption() for boolean\n> + * type.\n> + */\n> + static const char *\n> + show_system_is_read_only(void)\n> +{\n>\n\nFixed.\n\n> I'm not sure the comment is appropriate here, but I'm very sure the\n> extra spaces before \"static\" and \"show\" are not per style.\n>\n> + /* We'll be done once in-progress flag bit is cleared */\n>\n> Another whitespace mistake.\n>\n\nFixed.\n\n> + elog(DEBUG1, \"WALProhibitRequest: Waiting for checkpointer\");\n> + elog(DEBUG1, \"Done WALProhibitRequest\");\n>\n> I think these should be removed.\n>\n\nRemoved.\n\n> Can WALProhibitRequest() and performWALProhibitStateChange() be moved\n> to walprohibit.c, just to bring more of the code for this feature\n> together in one place? Maybe we could also rename them to\n> RequestWALProhibitChange() and CompleteWALProhibitChange()?\n>\n\nYes, I have moved these functions to walprohibit.c and renamed as suggested.\nFor this, I needed to add few helper functions to send a signal to checkpointer\nand update Control File, as send_signal_to_checkpointer &\nSetControlFileWALProhibitFlag() respectively, since checkpointer_pid\nor ControlFile are not directly accessible from walprohibit.c\n\n> - * think it should leave the child state in place.\n> + * think it should leave the child state in place. 
Note that the upper\n> + * transaction will be a force to ready-only irrespective of\n> its previous\n> + * status if the server state is WAL prohibited.\n> */\n> - XactReadOnly = s->prevXactReadOnly;\n> + XactReadOnly = s->prevXactReadOnly || !XLogInsertAllowed();\n>\n> Both instances of this pattern seem sketchy to me. You don't expect\n> that reverting the state to a previous state will instead change to a\n> different state that doesn't match up with what you had before. What\n> is the bad thing that would happen if we did not make this change?\n>\n\nWe can drop these changes now since we are simply terminating sessions that\nhave performed, or are expected to perform, write operations.\n\n> - * Else, must check to see if we're still in recovery.\n> + * Else, must check to see if we're still in recovery\n>\n> Spurious change.\n>\n\nFixed.\n\n> + /* Request checkpoint */\n> + RequestCheckpoint(CHECKPOINT_IMMEDIATE);\n> + ereport(LOG, (errmsg(\"system is now read write\")));\n>\n> This does not seem right. Perhaps the intention here was that the\n> system should perform a checkpoint when it switches to read-write\n> state after having skipped the startup checkpoint. But why would we do\n> this unconditionally in all cases where we just went to a read-write\n> state?\n>\n\nYou are correct: this could be expensive if the system changes to read-only\nfor only a short period. For the initial version, I did this unconditionally to\navoid additional shared-memory variables in XLogCtlData, but now the WAL\nprohibit state has its own shared-memory structure, so I have added the\nrequired variable to it. Now this checkpoint is done conditionally, with the\nCHECKPOINT_END_OF_RECOVERY & CHECKPOINT_IMMEDIATE flags, as we do in the\nstartup process.
Note that, to mark that the end-of-recovery checkpoint has been skipped by the\nstartup process, I have added a helper function,\nMarkCheckPointSkippedInWalProhibitState(); I am not sure the name that I have\nchosen is the best fit.\n\n> There's probably quite a bit more to say about 0003 but I think I'm\n> running too low on mental energy to say more now.\n>\n\nThanks for your time and suggestions.\n\nRegards,\nAmul",
"msg_date": "Tue, 1 Sep 2020 16:43:10 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "Hi,\n\nOn 2020-08-28 15:53:29 -0400, Robert Haas wrote:\n> On Wed, Aug 19, 2020 at 6:28 AM Amul Sul <sulamul@gmail.com> wrote:\n> > Attached is a rebased on top of the latest master head (# 3e98c0bafb2).\n>\n> Does anyone, especially anyone named Andres Freund, have comments on\n> 0001? That work is somewhat independent of the rest of this patch set\n> from a theoretical point of view, and it seems like if nobody sees a\n> problem with the line of attack there, it would make sense to go ahead\n> and commit that part.\n\nIt'd be easier to review the proposed commit if it included reasoning\nabout the change...\n\nIn particular, it looks to me like the commit actually implements two\ndifferent changes:\n1) Allow a barrier function to \"reject\" a set barrier, because it can't\n be set in that moment\n2) Allow barrier functions to raise errors\n\nand there's not much of an explanation as to why (probably somewhere\nupthread, but ...)\n\n\n\n /*\n * ProcSignalShmemSize\n@@ -486,17 +490,59 @@ ProcessProcSignalBarrier(void)\n \tflags = pg_atomic_exchange_u32(&MyProcSignalSlot->pss_barrierCheckMask, 0);\n\n \t/*\n-\t * Process each type of barrier. It's important that nothing we call from\n-\t * here throws an error, because pss_barrierCheckMask has already been\n-\t * cleared. 
If we jumped out of here before processing all barrier types,\n-\t * then we'd forget about the need to do so later.\n-\t *\n-\t * NB: It ought to be OK to call the barrier-processing functions\n-\t * unconditionally, but it's more efficient to call only the ones that\n-\t * might need us to do something based on the flags.\n+\t * If there are no flags set, then we can skip doing any real work.\n+\t * Otherwise, establish a PG_TRY block, so that we don't lose track of\n+\t * which types of barrier processing are needed if an ERROR occurs.\n \t */\n-\tif (BARRIER_SHOULD_CHECK(flags, PROCSIGNAL_BARRIER_PLACEHOLDER))\n-\t\tProcessBarrierPlaceholder();\n+\tif (flags != 0)\n+\t{\n+\t\tPG_TRY();\n+\t\t{\n+\t\t\t/*\n+\t\t\t * Process each type of barrier. The barrier-processing functions\n+\t\t\t * should normally return true, but may return false if the barrier\n+\t\t\t * can't be absorbed at the current time. This should be rare,\n+\t\t\t * because it's pretty expensive. Every single\n+\t\t\t * CHECK_FOR_INTERRUPTS() will return here until we manage to\n+\t\t\t * absorb the barrier, and that cost will add up in a hurry.\n+\t\t\t *\n+\t\t\t * NB: It ought to be OK to call the barrier-processing functions\n+\t\t\t * unconditionally, but it's more efficient to call only the ones\n+\t\t\t * that might need us to do something based on the flags.\n+\t\t\t */\n+\t\t\tif (BARRIER_SHOULD_CHECK(flags, PROCSIGNAL_BARRIER_PLACEHOLDER)\n+\t\t\t\t&& ProcessBarrierPlaceholder())\n+\t\t\t\tBARRIER_CLEAR_BIT(flags, PROCSIGNAL_BARRIER_PLACEHOLDER);\n\nThis pattern seems like it'll get unwieldy with more than one barrier\ntype. And won't flag \"unhandled\" barrier types either (already the case,\nI know). 
We could go for something like:\n\n while (flags != 0)\n {\n barrier_bit = pg_rightmost_one_pos32(flags);\n barrier_type = 1 >> barrier_bit;\n\n switch (barrier_type)\n {\n case PROCSIGNAL_BARRIER_PLACEHOLDER:\n processed = ProcessBarrierPlaceholder();\n }\n\n if (processed)\n BARRIER_CLEAR_BIT(flags, barrier_type);\n }\n\nBut perhaps that's too complicated?\n\n+\t\t}\n+\t\tPG_CATCH();\n+\t\t{\n+\t\t\t/*\n+\t\t\t * If an ERROR occurred, add any flags that weren't yet handled\n+\t\t\t * back into pss_barrierCheckMask, and reset the global variables\n+\t\t\t * so that we try again the next time we check for interrupts.\n+\t\t\t */\n+\t\t\tpg_atomic_fetch_or_u32(&MyProcSignalSlot->pss_barrierCheckMask,\n+\t\t\t\t\t\t\t\t flags);\n\nFor this to be correct, wouldn't flags need to be volatile? Otherwise\nthis might use a register value for flags, which might not contain the\ncorrect value at this point.\n\nPerhaps a comment explaining why we have to clear bits first would be\ngood?\n\n+\t\t\tProcSignalBarrierPending = true;\n+\t\t\tInterruptPending = true;\n+\n+\t\t\tPG_RE_THROW();\n+\t\t}\n+\t\tPG_END_TRY();\n\n\n+\t\t/*\n+\t\t * If some barrier was not successfully absorbed, we will have to try\n+\t\t * again later.\n+\t\t */\n+\t\tif (flags != 0)\n+\t\t{\n+\t\t\tpg_atomic_fetch_or_u32(&MyProcSignalSlot->pss_barrierCheckMask,\n+\t\t\t\t\t\t\t\t flags);\n+\t\t\tProcSignalBarrierPending = true;\n+\t\t\tInterruptPending = true;\n+\t\t\treturn;\n+\t\t}\n+\t}\n\nI wish there were a way we could combine the PG_CATCH and this instance\nof the same code. 
I'd probably just move into a helper.\n\n\nIt might be good to add a warning to WaitForProcSignalBarrier() or by\npss_barrierCheckMask indicating that it's *not* OK to look at\npss_barrierCheckMask when checking whether barriers have been processed.\n\n\n> Considering that this global barrier stuff is\n> new and that I'm not sure how well we really understand the problems\n> yet, there's a possibility that we might end up revising these details\n> again. I understand that most people, including me, are somewhat\n> reluctant to see experimental code get committed, in this case that\n> ship has basically sailed already, since neither of the patches that\n> we thought would use the barrier mechanism end up making it into v13.\n> I don't think it's really making things any worse to try to improve\n> the mechanism.\n\nYea, I have no problem with this.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 8 Sep 2020 11:20:05 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "Hi,\n\nThomas, there's one point below that could be relevant for you. You can\nsearch for your name and/or checkpoint...\n\n\nOn 2020-09-01 16:43:10 +0530, Amul Sul wrote:\n> diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c\n> index 42050ab7195..0ac826d3c2f 100644\n> --- a/src/backend/nodes/readfuncs.c\n> +++ b/src/backend/nodes/readfuncs.c\n> @@ -2552,6 +2552,19 @@ _readAlternativeSubPlan(void)\n> \tREAD_DONE();\n> }\n> \n> +/*\n> + * _readAlterSystemWALProhibitState\n> + */\n> +static AlterSystemWALProhibitState *\n> +_readAlterSystemWALProhibitState(void)\n> +{\n> +\tREAD_LOCALS(AlterSystemWALProhibitState);\n> +\n> +\tREAD_BOOL_FIELD(WALProhibited);\n> +\n> +\tREAD_DONE();\n> +}\n> +\n\nWhy do we need readfuncs support for this?\n\n> +\n> +/*\n> + * AlterSystemSetWALProhibitState\n> + *\n> + * Execute ALTER SYSTEM READ { ONLY | WRITE } statement.\n> + */\n> +static void\n> +AlterSystemSetWALProhibitState(AlterSystemWALProhibitState *stmt)\n> +{\n> +\t/* some code */\n> +\telog(INFO, \"AlterSystemSetWALProhibitState() called\");\n> +}\n\nAs long as it's not implemented it seems better to return an ERROR.\n\n> @@ -3195,6 +3195,16 @@ typedef struct AlterSystemStmt\n> \tVariableSetStmt *setstmt;\t/* SET subcommand */\n> } AlterSystemStmt;\n> \n> +/* ----------------------\n> + *\t\tAlter System Read Statement\n> + * ----------------------\n> + */\n> +typedef struct AlterSystemWALProhibitState\n> +{\n> +\tNodeTag\t\ttype;\n> +\tbool\t\tWALProhibited;\n> +} AlterSystemWALProhibitState;\n> +\n\nAll the nearby fields use under_score_style names.\n\n\n\n> From f59329e4a7285c5b132ca74473fe88e5ba537254 Mon Sep 17 00:00:00 2001\n> From: Amul Sul <amul.sul@enterprisedb.com>\n> Date: Fri, 19 Jun 2020 06:29:36 -0400\n> Subject: [PATCH v6 3/5] Implement ALTER SYSTEM READ ONLY using global barrier.\n> \n> Implementation:\n> \n> 1. 
When a user tried to change server state to WAL-Prohibited using\n> ALTER SYSTEM READ ONLY command; AlterSystemSetWALProhibitState()\n> raises request to checkpointer by marking current state to inprogress in\n> shared memory. Checkpointer, noticing that the current state is has\n\n\"is has\"\n\n> WALPROHIBIT_TRANSITION_IN_PROGRESS flag set, does the barrier request, and\n> then acknowledges back to the backend who requested the state change once\n> the transition has been completed. Final state will be updated in control\n> file to make it persistent across the system restarts.\n\nWhat makes checkpointer the right backend to do this work?\n\n\n> 2. When a backend receives the WAL-Prohibited barrier, at that moment if\n> it is already in a transaction and the transaction already assigned XID,\n> then the backend will be killed by throwing FATAL(XXX: need more discussion\n> on this)\n\n\n> 3. Otherwise, if that backend running transaction which yet to get XID\n> assigned we don't need to do anything special\n\nSomewhat garbled sentence...\n\n\n> 4. A new transaction (from existing or new backend) starts as a read-only\n> transaction.\n\nMaybe \"(in an existing or in a new backend)\"?\n\n\n> 5. Autovacuum launcher as well as checkpointer will don't do anything in\n> WAL-Prohibited server state until someone wakes us up. E.g. a backend\n> might later on request us to put the system back to read-write.\n\n\"will don't do anything\", \"might later on request us\"\n\n\n> 6. At shutdown in WAL-Prohibited mode, we'll skip shutdown checkpoint\n> and xlog rotation. Starting up again will perform crash recovery(XXX:\n> need some discussion on this as well) but the end of recovery checkpoint\n> will be skipped and it will be performed when the system changed to\n> WAL-Permitted mode.\n\nHm, this has some interesting interactions with some of Thomas' recent\nhacking.\n\n\n> 8. Only super user can toggle WAL-Prohibit state.\n\nHm. I don't quite agree with this. 
We try to avoid if (superuser())\nstyle checks these days, because they can't be granted to other\nusers. Look at how e.g. pg_promote() - an operation of similar severity\n- is handled. We just revoke the permission from public in\nsystem_views.sql:\nREVOKE EXECUTE ON FUNCTION pg_promote(boolean, integer) FROM public;\n\n\n> 9. Add system_is_read_only GUC show the system state -- will true when system\n> is wal prohibited or in recovery.\n\n*shows the system state. There's also some oddity in the second part of\nthe sentence.\n\nIs it really correct to show system_is_read_only as true during\nrecovery? For one, recovery could end soon after, putting the system\ninto r/w mode, if it wasn't actually ALTER SYSTEM READ ONLY'd. But also,\nduring recovery the database state actually changes if there are changes\nto replay. ISTM it would not be a good idea to mix ASRO and\npg_is_in_recovery() into one GUC.\n\n\n> --- /dev/null\n> +++ b/src/backend/access/transam/walprohibit.c\n> @@ -0,0 +1,321 @@\n> +/*-------------------------------------------------------------------------\n> + *\n> + * walprohibit.c\n> + * \t\tPostgreSQL write-ahead log prohibit states\n> + *\n> + *\n> + * Portions Copyright (c) 2020, PostgreSQL Global Development Group\n> + *\n> + * src/backend/access/transam/walprohibit.c\n> + *\n> + *-------------------------------------------------------------------------\n> + */\n> +#include \"postgres.h\"\n> +\n> +#include \"access/walprohibit.h\"\n> +#include \"pgstat.h\"\n> +#include \"port/atomics.h\"\n> +#include \"postmaster/bgwriter.h\"\n> +#include \"storage/condition_variable.h\"\n> +#include \"storage/procsignal.h\"\n> +#include \"storage/shmem.h\"\n> +\n> +/*\n> + * Shared-memory WAL prohibit state\n> + */\n> +typedef struct WALProhibitStateData\n> +{\n> +\t/* Indicates current WAL prohibit state */\n> +\tpg_atomic_uint32 SharedWALProhibitState;\n> +\n> +\t/* Startup checkpoint pending */\n> +\tbool\t\tcheckpointPending;\n> +\n> +\t/* Signaled when 
requested WAL prohibit state changes */\n> +\tConditionVariable walprohibit_cv;\n\nYou're using three different naming styles for as many members.\n\n\n\n> +/*\n> + * ProcessBarrierWALProhibit()\n> + *\n> + * Handle WAL prohibit state change request.\n> + */\n> +bool\n> +ProcessBarrierWALProhibit(void)\n> +{\n> +\t/*\n> +\t * Kill off any transactions that have an XID *before* allowing the system\n> +\t * to go WAL prohibit state.\n> +\t */\n> +\tif (FullTransactionIdIsValid(GetTopFullTransactionIdIfAny()))\n\nHm. I wonder if this check is good enough. If you look at\nRecordTransactionCommit() we also WAL log in some cases where no xid was\nassigned. This is particularly true of (auto-)vacuum, but also for HOT\npruning.\n\nI think it'd be good to put the logic of this check into xlog.c and\nmirror the logic in RecordTransactionCommit(). And add cross-referencing\ncomments to RecordTransactionCommit and the new function, reminding our\nfutures selves that both places need to be modified.\n\n\n> +\t{\n> +\t\t/* Should be here only for the WAL prohibit state. 
*/\n> +\t\tAssert(GetWALProhibitState() & WALPROHIBIT_STATE_READ_ONLY);\n\nThere are no races where an ASRO READ ONLY is quickly followed by ASRO\nREAD WRITE where this could be reached?\n\n\n> +/*\n> + * AlterSystemSetWALProhibitState()\n> + *\n> + * Execute ALTER SYSTEM READ { ONLY | WRITE } statement.\n> + */\n> +void\n> +AlterSystemSetWALProhibitState(AlterSystemWALProhibitState *stmt)\n> +{\n> +\tuint32\t\tstate;\n> +\n> +\tif (!superuser())\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> +\t\t\t\t errmsg(\"must be superuser to execute ALTER SYSTEM command\")));\n\nSee comments about this above.\n\n\n> +\t/* Alter WAL prohibit state not allowed during recovery */\n> +\tPreventCommandDuringRecovery(\"ALTER SYSTEM\");\n> +\n> +\t/* Requested state */\n> +\tstate = stmt->WALProhibited ?\n> +\t\tWALPROHIBIT_STATE_READ_ONLY : WALPROHIBIT_STATE_READ_WRITE;\n> +\n> +\t/*\n> +\t * Since we yet to convey this WAL prohibit state to all backend mark it\n> +\t * in-progress.\n> +\t */\n> +\tstate |= WALPROHIBIT_TRANSITION_IN_PROGRESS;\n> +\n> +\tif (!SetWALProhibitState(state))\n> +\t\treturn;\t\t\t\t\t/* server is already in the desired state */\n> +\n\nThis use of bitmasks seems unnecessary to me. 
I'd rather have one param\nfor WALPROHIBIT_STATE_READ_ONLY / WALPROHIBIT_STATE_READ_WRITE and one\nfor WALPROHIBIT_TRANSITION_IN_PROGRESS\n\n\n\n> +/*\n> + * RequestWALProhibitChange()\n> + *\n> + * Request checkpointer to make the WALProhibitState to read-only.\n> + */\n> +static void\n> +RequestWALProhibitChange(void)\n> +{\n> +\t/* Must not be called from checkpointer */\n> +\tAssert(!AmCheckpointerProcess());\n> +\tAssert(GetWALProhibitState() & WALPROHIBIT_TRANSITION_IN_PROGRESS);\n> +\n> +\t/*\n> +\t * If in a standalone backend, just do it ourselves.\n> +\t */\n> +\tif (!IsPostmasterEnvironment)\n> +\t{\n> +\t\tCompleteWALProhibitChange(GetWALProhibitState());\n> +\t\treturn;\n> +\t}\n> +\n> +\tsend_signal_to_checkpointer(SIGINT);\n> +\n> +\t/* Wait for the state to change to read-only */\n> +\tConditionVariablePrepareToSleep(&WALProhibitState->walprohibit_cv);\n> +\tfor (;;)\n> +\t{\n> +\t\t/* We'll be done once in-progress flag bit is cleared */\n> +\t\tif (!(GetWALProhibitState() & WALPROHIBIT_TRANSITION_IN_PROGRESS))\n> +\t\t\tbreak;\n> +\n> +\t\tConditionVariableSleep(&WALProhibitState->walprohibit_cv,\n> +\t\t\t\t\t\t\t WAIT_EVENT_WALPROHIBIT_STATE_CHANGE);\n> +\t}\n> +\tConditionVariableCancelSleep();\n\nWhat if somebody concurrently changes the state back to READ WRITE?\nWon't we unnecessarily wait here?\n\nThat's probably fine, because we would just wait until that transition\nis complete too. But at least a comment about that would be\ngood. Alternatively a \"ASRO transitions completed counter\" or such might\nbe a better idea?\n\n\n> +/*\n> + * CompleteWALProhibitChange()\n> + *\n> + * Checkpointer will call this to complete the requested WAL prohibit state\n> + * transition.\n> + */\n> +void\n> +CompleteWALProhibitChange(uint32 wal_state)\n> +{\n> +\tuint64\t\tbarrierGeneration;\n> +\n> +\t/*\n> +\t * Must be called from checkpointer. 
Otherwise, it must be single-user\n> +\t * backend.\n> +\t */\n> +\tAssert(AmCheckpointerProcess() || !IsPostmasterEnvironment);\n> +\tAssert(wal_state & WALPROHIBIT_TRANSITION_IN_PROGRESS);\n> +\n> +\t/*\n> +\t * WAL prohibit state change is initiated. We need to complete the state\n> +\t * transition by setting requested WAL prohibit state in all backends.\n> +\t */\n> +\telog(DEBUG1, \"waiting for backends to adopt requested WAL prohibit state\");\n> +\n> +\t/* Emit global barrier */\n> +\tbarrierGeneration = EmitProcSignalBarrier(PROCSIGNAL_BARRIER_WALPROHIBIT);\n> +\tWaitForProcSignalBarrier(barrierGeneration);\n> +\n> +\t/* And flush all writes. */\n> +\tXLogFlush(GetXLogWriteRecPtr());\n\nHm, maybe I'm missing something, but why is the write pointer the right\nthing to flush? That won't include records that haven't been written to\ndisk yet... We also need to trigger writing out all WAL that is as of\nyet unwritten, no? Without having thought a lot about it, it seems that\nGetXLogInsertRecPtr() would be the right thing to flush?\n\n\n> +\t/* Set final state by clearing in-progress flag bit */\n> +\tif (SetWALProhibitState(wal_state & ~(WALPROHIBIT_TRANSITION_IN_PROGRESS)))\n> +\t{\n> +\t\tbool\t\twal_prohibited;\n> +\n> +\t\twal_prohibited = (wal_state & WALPROHIBIT_STATE_READ_ONLY) != 0;\n> +\n> +\t\t/* Update the control file to make state persistent */\n> +\t\tSetControlFileWALProhibitFlag(wal_prohibited);\n\nHm. Is there an issue with not WAL logging the control file change? 
Is\nthere a scenario where we a crash + recovery would end up overwriting\nthis?\n\n\n> +\t\tif (wal_prohibited)\n> +\t\t\tereport(LOG, (errmsg(\"system is now read only\")));\n> +\t\telse\n> +\t\t{\n> +\t\t\t/*\n> +\t\t\t * Request checkpoint if the end-of-recovery checkpoint has been\n> +\t\t\t * skipped previously.\n> +\t\t\t */\n> +\t\t\tif (WALProhibitState->checkpointPending)\n> +\t\t\t{\n> +\t\t\t\tRequestCheckpoint(CHECKPOINT_END_OF_RECOVERY |\n> +\t\t\t\t\t\t\t\t CHECKPOINT_IMMEDIATE);\n> +\t\t\t\tWALProhibitState->checkpointPending = false;\n> +\t\t\t}\n> +\t\t\tereport(LOG, (errmsg(\"system is now read write\")));\n> +\t\t}\n> +\t}\n> +\n> +\t/* Wake up the backend who requested the state change */\n> +\tConditionVariableBroadcast(&WALProhibitState->walprohibit_cv);\n\nCould be multiple backends, right?\n\n\n> +}\n> +\n> +/*\n> + * GetWALProhibitState()\n> + *\n> + * Atomically return the current server WAL prohibited state\n> + */\n> +uint32\n> +GetWALProhibitState(void)\n> +{\n> +\treturn pg_atomic_read_u32(&WALProhibitState->SharedWALProhibitState);\n> +}\n\nIs there an issue with needing memory barriers here?\n\n\n> +/*\n> + * SetWALProhibitState()\n> + *\n> + * Change current WAL prohibit state to the input state.\n> + *\n> + * If the server is already completely moved to the requested WAL prohibit\n> + * state, or if the desired state is same as the current state, return false,\n> + * indicating that the server state did not change. 
Else return true.\n> + */\n> +bool\n> +SetWALProhibitState(uint32 new_state)\n> +{\n> +\tbool\t\tstate_updated = false;\n> +\tuint32\t\tcur_state;\n> +\n> +\tcur_state = GetWALProhibitState();\n> +\n> +\t/* Server is already in requested state */\n> +\tif (new_state == cur_state ||\n> +\t\tnew_state == (cur_state | WALPROHIBIT_TRANSITION_IN_PROGRESS))\n> +\t\treturn false;\n> +\n> +\t/* Prevent concurrent contrary in progress transition state setting */\n> +\tif ((new_state & WALPROHIBIT_TRANSITION_IN_PROGRESS) &&\n> +\t\t(cur_state & WALPROHIBIT_TRANSITION_IN_PROGRESS))\n> +\t{\n> +\t\tif (cur_state & WALPROHIBIT_STATE_READ_ONLY)\n> +\t\t\tereport(ERROR,\n> +\t\t\t\t\t(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> +\t\t\t\t\t errmsg(\"system state transition to read only is already in progress\"),\n> +\t\t\t\t\t errhint(\"Try after sometime again.\")));\n> +\t\telse\n> +\t\t\tereport(ERROR,\n> +\t\t\t\t\t(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> +\t\t\t\t\t errmsg(\"system state transition to read write is already in progress\"),\n> +\t\t\t\t\t errhint(\"Try after sometime again.\")));\n> +\t}\n> +\n> +\t/* Update new state in share memory */\n> +\tstate_updated =\n> +\t\tpg_atomic_compare_exchange_u32(&WALProhibitState->SharedWALProhibitState,\n> +\t\t\t\t\t\t\t\t\t &cur_state, new_state);\n> +\n> +\tif (!state_updated)\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> +\t\t\t\t errmsg(\"system read write state concurrently changed\"),\n> +\t\t\t\t errhint(\"Try after sometime again.\")));\n> +\n\nI don't think it's safe to use pg_atomic_compare_exchange_u32() outside\nof a loop. 
I think there's platforms (basically all load-linked /\nstore-conditional architectures) where that can fail spuriously.\n\nAlso, there's no memory barrier around GetWALProhibitState, so there's\nno guarantee it's not an out-of-date value you're starting with.\n\n\n> +/*\n> + * MarkCheckPointSkippedInWalProhibitState()\n> + *\n> + * Sets checkpoint pending flag so that it can be performed next time while\n> + * changing system state to WAL permitted.\n> + */\n> +void\n> +MarkCheckPointSkippedInWalProhibitState(void)\n> +{\n> +\tWALProhibitState->checkpointPending = true;\n> +}\n\nI don't *at all* like this living outside of xlog.c. I think this should\nbe moved there, and merged with deferring checkpoints in other cases\n(promotions, not immediately performing a checkpoint after recovery).\nThere's state in ControlFile *and* here for essentially the same thing.\n\n\n\n> +\t * If it is not currently possible to insert write-ahead log records,\n> +\t * either because we are still in recovery or because ALTER SYSTEM READ\n> +\t * ONLY has been executed, force this to be a read-only transaction.\n> +\t * We have lower level defences in XLogBeginInsert() and elsewhere to stop\n> +\t * us from modifying data during recovery when !XLogInsertAllowed(), but\n> +\t * this gives the normal indication to the user that the transaction is\n> +\t * read-only.\n> +\t *\n> +\t * On the other hand, we only need to set the startedInRecovery flag when\n> +\t * the transaction started during recovery, and not when WAL is otherwise\n> +\t * prohibited. This information is used by RelationGetIndexScan() to\n> +\t * decide whether to permit (1) relying on existing killed-tuple markings\n> +\t * and (2) further killing of index tuples.
Even when WAL is prohibited\n> +\t * on the master, it's still the master, so the former is OK; and since\n> +\t * killing index tuples doesn't generate WAL, the latter is also OK.\n> +\t * See comments in RelationGetIndexScan() and MarkBufferDirtyHint().\n> +\t */\n> +\tXactReadOnly = DefaultXactReadOnly || !XLogInsertAllowed();\n> +\ts->startedInRecovery = RecoveryInProgress();\n\nIt's somewhat ugly that we call RecoveryInProgress() once in\nXLogInsertAllowed() and then again directly here... It's probably fine\nruntime cost wise, but...\n\n\n> /*\n> * Subroutine to try to fetch and validate a prior checkpoint record.\n> *\n> @@ -8508,9 +8564,13 @@ ShutdownXLOG(int code, Datum arg)\n> \t */\n> \tWalSndWaitStopping();\n> \n> +\t/*\n> +\t * The restartpoint, checkpoint, or xlog rotation will be performed if the\n> +\t * WAL writing is permitted.\n> +\t */\n> \tif (RecoveryInProgress())\n> \t\tCreateRestartPoint(CHECKPOINT_IS_SHUTDOWN | CHECKPOINT_IMMEDIATE);\n> -\telse\n> +\telse if (XLogInsertAllowed())\n\nNot sure I like going via XLogInsertAllowed(), that seems like a\nconfusing indirection here. And it encompasses things we atually don't\nwant to check for - it's fragile to also look at LocalXLogInsertAllowed\nhere imo.\n\n\n> \tShutdownCLOG();\n> \tShutdownCommitTs();\n> \tShutdownSUBTRANS();\n> diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c\n> index 1b8cd7bacd4..aa4cdd57ec1 100644\n> --- a/src/backend/postmaster/autovacuum.c\n> +++ b/src/backend/postmaster/autovacuum.c\n> @@ -652,6 +652,10 @@ AutoVacLauncherMain(int argc, char *argv[])\n> \n> \t\tHandleAutoVacLauncherInterrupts();\n> \n> +\t\t/* If the server is read only just go back to sleep. */\n> +\t\tif (!XLogInsertAllowed())\n> +\t\t\tcontinue;\n> +\n\nI think we really should have a different functions for places like\nthis. We don't want to generally hide bugs like e.g. 
starting the\nautovac launcher in recovery, but this would.\n\n\n\n> @@ -342,6 +344,28 @@ CheckpointerMain(void)\n> \t\tAbsorbSyncRequests();\n> \t\tHandleCheckpointerInterrupts();\n> \n> +\t\twal_state = GetWALProhibitState();\n> +\n> +\t\tif (wal_state & WALPROHIBIT_TRANSITION_IN_PROGRESS)\n> +\t\t{\n> +\t\t\t/* Complete WAL prohibit state change request */\n> +\t\t\tCompleteWALProhibitChange(wal_state);\n> +\t\t\tcontinue;\n> +\t\t}\n> +\t\telse if (wal_state & WALPROHIBIT_STATE_READ_ONLY)\n> +\t\t{\n> +\t\t\t/*\n> +\t\t\t * Don't do anything until someone wakes us up. For example a\n> +\t\t\t * backend might later on request us to put the system back to\n> +\t\t\t * read-write wal prohibit sate.\n> +\t\t\t */\n> +\t\t\t(void) WaitLatch(MyLatch, WL_LATCH_SET | WL_EXIT_ON_PM_DEATH, -1,\n> +\t\t\t\t\t\t\t WAIT_EVENT_CHECKPOINTER_MAIN);\n> +\t\t\tcontinue;\n> +\t\t}\n> +\t\tAssert(wal_state == WALPROHIBIT_STATE_READ_WRITE);\n> +\n> \t\t/*\n> \t\t * Detect a pending checkpoint request by checking whether the flags\n> \t\t * word in shared memory is nonzero. We shouldn't need to acquire the\n> @@ -1323,3 +1347,16 @@ FirstCallSinceLastCheckpoint(void)\n> \n> \treturn FirstCall;\n> }\n\nSo, if we're in the middle of a paced checkpoint with a large\ncheckpoint_timeout - a sensible real world configuration - we'll not\nprocess ASRO until that checkpoint is over? That seems very much not\npractical. What am I missing?\n\n\n> +/*\n> + * send_signal_to_checkpointer allows a process to send a signal to the checkpoint process.\n> + */\n> +void\n> +send_signal_to_checkpointer(int signum)\n> +{\n> +\tif (CheckpointerShmem->checkpointer_pid == 0)\n> +\t\telog(ERROR, \"checkpointer is not running\");\n> +\n> +\tif (kill(CheckpointerShmem->checkpointer_pid, signum) != 0)\n> +\t\telog(ERROR, \"could not signal checkpointer: %m\");\n> +}\n\nSudden switch to a different naming style...\n\n\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 9 Sep 2020 14:03:42 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
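The paced-checkpoint complaint above boils down to this: the checkpointer only looks at the ASRO request flag between checkpoints, so a long checkpoint delays the transition for minutes. The usual fix is to poll the request at the absorb points *inside* the long task. A minimal self-contained sketch of that pattern in portable C11 atomics — every name here is hypothetical, not from the patch:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical shared flag, set by a backend requesting a state change. */
static atomic_bool transition_requested;

/*
 * Consume a pending request, if any.  Intended to be called at the
 * "absorb" points inside a long task (e.g. between buffer writes of a
 * paced checkpoint), not only between tasks, so a request is noticed
 * promptly instead of after the checkpoint finishes.
 */
static bool
absorb_transition_request(void)
{
    bool expected = true;

    /* atomically clear the flag so the request is handled exactly once */
    return atomic_compare_exchange_strong(&transition_requested,
                                          &expected, false);
}

/* A paced task that reacts mid-loop instead of only after it finishes. */
static int
paced_task(int steps)
{
    int handled = 0;
    int i;

    for (i = 0; i < steps; i++)
    {
        /* ... write one batch of buffers, sleep for pacing ... */
        if (absorb_transition_request())
            handled++;          /* react here, without waiting for the end */
    }
    return handled;
}
```

The compare-exchange (rather than a plain load-and-store) makes the consume step race-free if several workers could absorb the same request.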
{
"msg_contents": "On Thu, Sep 10, 2020 at 2:33 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n\nThanks for your time.\n\n>\n> Thomas, there's one point below that could be relevant for you. You can\n> search for your name and/or checkpoint...\n>\n>\n> On 2020-09-01 16:43:10 +0530, Amul Sul wrote:\n> > diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c\n> > index 42050ab7195..0ac826d3c2f 100644\n> > --- a/src/backend/nodes/readfuncs.c\n> > +++ b/src/backend/nodes/readfuncs.c\n> > @@ -2552,6 +2552,19 @@ _readAlternativeSubPlan(void)\n> > READ_DONE();\n> > }\n> >\n> > +/*\n> > + * _readAlterSystemWALProhibitState\n> > + */\n> > +static AlterSystemWALProhibitState *\n> > +_readAlterSystemWALProhibitState(void)\n> > +{\n> > + READ_LOCALS(AlterSystemWALProhibitState);\n> > +\n> > + READ_BOOL_FIELD(WALProhibited);\n> > +\n> > + READ_DONE();\n> > +}\n> > +\n>\n> Why do we need readfuncs support for this?\n>\n\nI thought we need that from your previous comment[1].\n\n> > +\n> > +/*\n> > + * AlterSystemSetWALProhibitState\n> > + *\n> > + * Execute ALTER SYSTEM READ { ONLY | WRITE } statement.\n> > + */\n> > +static void\n> > +AlterSystemSetWALProhibitState(AlterSystemWALProhibitState *stmt)\n> > +{\n> > + /* some code */\n> > + elog(INFO, \"AlterSystemSetWALProhibitState() called\");\n> > +}\n>\n> As long as it's not implemented it seems better to return an ERROR.\n>\n\nOk, will add an error in the next version.\n\n> > @@ -3195,6 +3195,16 @@ typedef struct AlterSystemStmt\n> > VariableSetStmt *setstmt; /* SET subcommand */\n> > } AlterSystemStmt;\n> >\n> > +/* ----------------------\n> > + * Alter System Read Statement\n> > + * ----------------------\n> > + */\n> > +typedef struct AlterSystemWALProhibitState\n> > +{\n> > + NodeTag type;\n> > + bool WALProhibited;\n> > +} AlterSystemWALProhibitState;\n> > +\n>\n> All the nearby fields use under_score_style names.\n>\n\nI am not sure which nearby fields having the underscore that you are 
referring\nto. Probably \"WALProhibited\" needs to be renamed to \"walprohibited\" to be\ninline with the nearby fields.\n\n>\n> > From f59329e4a7285c5b132ca74473fe88e5ba537254 Mon Sep 17 00:00:00 2001\n> > From: Amul Sul <amul.sul@enterprisedb.com>\n> > Date: Fri, 19 Jun 2020 06:29:36 -0400\n> > Subject: [PATCH v6 3/5] Implement ALTER SYSTEM READ ONLY using global barrier.\n> >\n> > Implementation:\n> >\n> > 1. When a user tried to change server state to WAL-Prohibited using\n> > ALTER SYSTEM READ ONLY command; AlterSystemSetWALProhibitState()\n> > raises request to checkpointer by marking current state to inprogress in\n> > shared memory. Checkpointer, noticing that the current state is has\n>\n> \"is has\"\n>\n> > WALPROHIBIT_TRANSITION_IN_PROGRESS flag set, does the barrier request, and\n> > then acknowledges back to the backend who requested the state change once\n> > the transition has been completed. Final state will be updated in control\n> > file to make it persistent across the system restarts.\n>\n> What makes checkpointer the right backend to do this work?\n>\n\nOnce we've initiated the change to a read-only state, we probably want to\nalways either finish that change or go back to read-write, even if the process\nthat initiated the change is interrupted. Leaving the system in a\nhalf-way-in-between state long term seems bad. We could have added a new\nbackground process for this, but we chose to put the checkpointer in charge of\nthe state change to keep the first version of the patch simple. The\ncheckpointer isn't likely to get killed, but if it does, it will be relaunched\nand the new one can clean things up. Later we might want a dedicated\nbackground worker that is even less likely to get killed.\n\n>\n> > 2. 
When a backend receives the WAL-Prohibited barrier, at that moment if\n> > it is already in a transaction and the transaction already assigned XID,\n> > then the backend will be killed by throwing FATAL(XXX: need more discussion\n> > on this)\n>\n>\n> > 3. Otherwise, if that backend running transaction which yet to get XID\n> > assigned we don't need to do anything special\n>\n> Somewhat garbled sentence...\n>\n>\n> > 4. A new transaction (from existing or new backend) starts as a read-only\n> > transaction.\n>\n> Maybe \"(in an existing or in a new backend)\"?\n>\n>\n> > 5. Autovacuum launcher as well as checkpointer will don't do anything in\n> > WAL-Prohibited server state until someone wakes us up. E.g. a backend\n> > might later on request us to put the system back to read-write.\n>\n> \"will don't do anything\", \"might later on request us\"\n>\n\nOk, I'll fix all of this. I usually don't much focus on the commit message text\nbut I try to make it as much as possible sane enough.\n\n>\n> > 6. At shutdown in WAL-Prohibited mode, we'll skip shutdown checkpoint\n> > and xlog rotation. Starting up again will perform crash recovery(XXX:\n> > need some discussion on this as well) but the end of recovery checkpoint\n> > will be skipped and it will be performed when the system changed to\n> > WAL-Permitted mode.\n>\n> Hm, this has some interesting interactions with some of Thomas' recent\n> hacking.\n>\n\nI would be so thankful for the help.\n\n>\n> > 8. Only super user can toggle WAL-Prohibit state.\n>\n> Hm. I don't quite agree with this. We try to avoid if (superuser())\n> style checks these days, because they can't be granted to other\n> users. Look at how e.g. pg_promote() - an operation of similar severity\n> - is handled. We just revoke the permission from public in\n> system_views.sql:\n> REVOKE EXECUTE ON FUNCTION pg_promote(boolean, integer) FROM public;\n>\n\nOk, currently we don't have SQL callable function to change the system\nread-write state. 
Do you want me to add that? If so, any naming suggesting? How\nabout pg_make_system_read_only(bool) or have two function as\npg_make_system_read_only(void) & pg_make_system_read_write(void).\n\n>\n> > 9. Add system_is_read_only GUC show the system state -- will true when system\n> > is wal prohibited or in recovery.\n>\n> *shows the system state. There's also some oddity in the second part of\n> the sentence.\n>\n> Is it really correct to show system_is_read_only as true during\n> recovery? For one, recovery could end soon after, putting the system\n> into r/w mode, if it wasn't actually ALTER SYSTEM READ ONLY'd. But also,\n> during recovery the database state actually changes if there are changes\n> to replay. ISTM it would not be a good idea to mix ASRO and\n> pg_is_in_recovery() into one GUC.\n>\n\nWell, whether the system is in recovery or wal prohibited state it is read-only\nfor the user perspective, isn't it?\n\n>\n> > --- /dev/null\n> > +++ b/src/backend/access/transam/walprohibit.c\n> > @@ -0,0 +1,321 @@\n> > +/*-------------------------------------------------------------------------\n> > + *\n> > + * walprohibit.c\n> > + * PostgreSQL write-ahead log prohibit states\n> > + *\n> > + *\n> > + * Portions Copyright (c) 2020, PostgreSQL Global Development Group\n> > + *\n> > + * src/backend/access/transam/walprohibit.c\n> > + *\n> > + *-------------------------------------------------------------------------\n> > + */\n> > +#include \"postgres.h\"\n> > +\n> > +#include \"access/walprohibit.h\"\n> > +#include \"pgstat.h\"\n> > +#include \"port/atomics.h\"\n> > +#include \"postmaster/bgwriter.h\"\n> > +#include \"storage/condition_variable.h\"\n> > +#include \"storage/procsignal.h\"\n> > +#include \"storage/shmem.h\"\n> > +\n> > +/*\n> > + * Shared-memory WAL prohibit state\n> > + */\n> > +typedef struct WALProhibitStateData\n> > +{\n> > + /* Indicates current WAL prohibit state */\n> > + pg_atomic_uint32 SharedWALProhibitState;\n> > +\n> > + /* Startup 
checkpoint pending */\n> > + bool checkpointPending;\n> > +\n> > + /* Signaled when requested WAL prohibit state changes */\n> > + ConditionVariable walprohibit_cv;\n>\n> You're using three different naming styles for as many members.\n>\n\nIll fix in the next version.\n\n>\n> > +/*\n> > + * ProcessBarrierWALProhibit()\n> > + *\n> > + * Handle WAL prohibit state change request.\n> > + */\n> > +bool\n> > +ProcessBarrierWALProhibit(void)\n> > +{\n> > + /*\n> > + * Kill off any transactions that have an XID *before* allowing the system\n> > + * to go WAL prohibit state.\n> > + */\n> > + if (FullTransactionIdIsValid(GetTopFullTransactionIdIfAny()))\n>\n> Hm. I wonder if this check is good enough. If you look at\n> RecordTransactionCommit() we also WAL log in some cases where no xid was\n> assigned. This is particularly true of (auto-)vacuum, but also for HOT\n> pruning.\n>\n> I think it'd be good to put the logic of this check into xlog.c and\n> mirror the logic in RecordTransactionCommit(). And add cross-referencing\n> comments to RecordTransactionCommit and the new function, reminding our\n> futures selves that both places need to be modified.\n>\n\nI am not sure I have understood this, here is the snip from the implementation\ndetail from the first post[2]:\n\n\"Open transactions that don't have an XID are not killed, but will get an ERROR\nif they try to acquire an XID later, or if they try to write WAL without\nacquiring an XID (e.g. VACUUM). 
To make that happen, the patch adds a new\ncoding rule: a critical section that will write WAL must be preceded by a call\nto CheckWALPermitted(), AssertWALPermitted(), or AssertWALPermitted_HaveXID().\nThe latter variants are used when we know for certain that inserting WAL here\nmust be OK, either because we have an XID (we would have been killed by a change\nto read-only if one had occurred) or for some other reason.\"\n\nDo let me know if you want further clarification.\n\n>\n> > + {\n> > + /* Should be here only for the WAL prohibit state. */\n> > + Assert(GetWALProhibitState() & WALPROHIBIT_STATE_READ_ONLY);\n>\n> There are no races where an ASRO READ ONLY is quickly followed by ASRO\n> READ WRITE where this could be reached?\n>\n\nNo, right now SetWALProhibitState() doesn't allow two transient wal prohibit\nstates at a time.\n\n>\n> > +/*\n> > + * AlterSystemSetWALProhibitState()\n> > + *\n> > + * Execute ALTER SYSTEM READ { ONLY | WRITE } statement.\n> > + */\n> > +void\n> > +AlterSystemSetWALProhibitState(AlterSystemWALProhibitState *stmt)\n> > +{\n> > + uint32 state;\n> > +\n> > + if (!superuser())\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> > + errmsg(\"must be superuser to execute ALTER SYSTEM command\")));\n>\n> See comments about this above.\n>\n>\n> > + /* Alter WAL prohibit state not allowed during recovery */\n> > + PreventCommandDuringRecovery(\"ALTER SYSTEM\");\n> > +\n> > + /* Requested state */\n> > + state = stmt->WALProhibited ?\n> > + WALPROHIBIT_STATE_READ_ONLY : WALPROHIBIT_STATE_READ_WRITE;\n> > +\n> > + /*\n> > + * Since we yet to convey this WAL prohibit state to all backend mark it\n> > + * in-progress.\n> > + */\n> > + state |= WALPROHIBIT_TRANSITION_IN_PROGRESS;\n> > +\n> > + if (!SetWALProhibitState(state))\n> > + return; /* server is already in the desired state */\n> > +\n>\n> This use of bitmasks seems unnecessary to me. 
I'd rather have one param\n> for WALPROHIBIT_STATE_READ_ONLY / WALPROHIBIT_STATE_READ_WRITE and one\n> for WALPROHIBIT_TRANSITION_IN_PROGRESS\n>\n\nOk.\n\nHow about the new version of SetWALProhibitState function as :\nSetWALProhibitState(bool wal_prohibited, bool is_final_state) ?\n\n>\n>\n> > +/*\n> > + * RequestWALProhibitChange()\n> > + *\n> > + * Request checkpointer to make the WALProhibitState to read-only.\n> > + */\n> > +static void\n> > +RequestWALProhibitChange(void)\n> > +{\n> > + /* Must not be called from checkpointer */\n> > + Assert(!AmCheckpointerProcess());\n> > + Assert(GetWALProhibitState() & WALPROHIBIT_TRANSITION_IN_PROGRESS);\n> > +\n> > + /*\n> > + * If in a standalone backend, just do it ourselves.\n> > + */\n> > + if (!IsPostmasterEnvironment)\n> > + {\n> > + CompleteWALProhibitChange(GetWALProhibitState());\n> > + return;\n> > + }\n> > +\n> > + send_signal_to_checkpointer(SIGINT);\n> > +\n> > + /* Wait for the state to change to read-only */\n> > + ConditionVariablePrepareToSleep(&WALProhibitState->walprohibit_cv);\n> > + for (;;)\n> > + {\n> > + /* We'll be done once in-progress flag bit is cleared */\n> > + if (!(GetWALProhibitState() & WALPROHIBIT_TRANSITION_IN_PROGRESS))\n> > + break;\n> > +\n> > + ConditionVariableSleep(&WALProhibitState->walprohibit_cv,\n> > + WAIT_EVENT_WALPROHIBIT_STATE_CHANGE);\n> > + }\n> > + ConditionVariableCancelSleep();\n>\n> What if somebody concurrently changes the state back to READ WRITE?\n> Won't we unnecessarily wait here?\n>\n\nYes, there will be wait.\n\n> That's probably fine, because we would just wait until that transition\n> is complete too. But at least a comment about that would be\n> good. 
Alternatively a \"ASRO transitions completed counter\" or such might\n> be a better idea?\n>\n\nOk, will add comments, but could you please elaborate a little bit on the \"ASRO\ntransitions completed counter\" idea -- is there any existing counter I can refer\nto?\n\n>\n> > +/*\n> > + * CompleteWALProhibitChange()\n> > + *\n> > + * Checkpointer will call this to complete the requested WAL prohibit state\n> > + * transition.\n> > + */\n> > +void\n> > +CompleteWALProhibitChange(uint32 wal_state)\n> > +{\n> > + uint64 barrierGeneration;\n> > +\n> > + /*\n> > + * Must be called from checkpointer. Otherwise, it must be single-user\n> > + * backend.\n> > + */\n> > + Assert(AmCheckpointerProcess() || !IsPostmasterEnvironment);\n> > + Assert(wal_state & WALPROHIBIT_TRANSITION_IN_PROGRESS);\n> > +\n> > + /*\n> > + * WAL prohibit state change is initiated. We need to complete the state\n> > + * transition by setting requested WAL prohibit state in all backends.\n> > + */\n> > + elog(DEBUG1, \"waiting for backends to adopt requested WAL prohibit state\");\n> > +\n> > + /* Emit global barrier */\n> > + barrierGeneration = EmitProcSignalBarrier(PROCSIGNAL_BARRIER_WALPROHIBIT);\n> > + WaitForProcSignalBarrier(barrierGeneration);\n> > +\n> > + /* And flush all writes. */\n> > + XLogFlush(GetXLogWriteRecPtr());\n>\n> Hm, maybe I'm missing something, but why is the write pointer the right\n> thing to flush? That won't include records that haven't been written to\n> disk yet... We also need to trigger writing out all WAL that is as of\n> yet unwritten, no? Without having thought a lot about it, it seems that\n> GetXLogInsertRecPtr() would be the right thing to flush?\n>\n\nTBH, I am not an expert in this area. I want to flush the latest record\npointer that needs to be flushed; I think GetXLogInsertRecPtr() would be fine\nif that is the latest one. 
Note that wal flushes are not blocked in read-only mode.\n\n>\n> > + /* Set final state by clearing in-progress flag bit */\n> > + if (SetWALProhibitState(wal_state & ~(WALPROHIBIT_TRANSITION_IN_PROGRESS)))\n> > + {\n> > + bool wal_prohibited;\n> > +\n> > + wal_prohibited = (wal_state & WALPROHIBIT_STATE_READ_ONLY) != 0;\n> > +\n> > + /* Update the control file to make state persistent */\n> > + SetControlFileWALProhibitFlag(wal_prohibited);\n>\n> Hm. Is there an issue with not WAL logging the control file change? Is\n> there a scenario where a crash + recovery would end up overwriting\n> this?\n>\n\nI am not sure. If the system crashes before this update, that means we haven't\nacknowledged the system state change, and the server will be restarted with the\nprevious state.\n\nCould you please explain what is bothering you?\n\n>\n> > + if (wal_prohibited)\n> > + ereport(LOG, (errmsg(\"system is now read only\")));\n> > + else\n> > + {\n> > + /*\n> > + * Request checkpoint if the end-of-recovery checkpoint has been\n> > + * skipped previously.\n> > + */\n> > + if (WALProhibitState->checkpointPending)\n> > + {\n> > + RequestCheckpoint(CHECKPOINT_END_OF_RECOVERY |\n> > + CHECKPOINT_IMMEDIATE);\n> > + WALProhibitState->checkpointPending = false;\n> > + }\n> > + ereport(LOG, (errmsg(\"system is now read write\")));\n> > + }\n> > + }\n> > +\n> > + /* Wake up the backend who requested the state change */\n> > + ConditionVariableBroadcast(&WALProhibitState->walprohibit_cv);\n>\n> Could be multiple backends, right?\n>\n\nYes, you are correct, will fix that.\n\n>\n> > +}\n> > +\n> > +/*\n> > + * GetWALProhibitState()\n> > + *\n> > + * Atomically return the current server WAL prohibited state\n> > + */\n> > +uint32\n> > +GetWALProhibitState(void)\n> > +{\n> > + return pg_atomic_read_u32(&WALProhibitState->SharedWALProhibitState);\n> > +}\n>\n> Is there an issue with needing memory barriers here?\n>\n>\n> > +/*\n> > + * SetWALProhibitState()\n> > + *\n> > + * Change current WAL 
prohibit state to the input state.\n> > + *\n> > + * If the server is already completely moved to the requested WAL prohibit\n> > + * state, or if the desired state is same as the current state, return false,\n> > + * indicating that the server state did not change. Else return true.\n> > + */\n> > +bool\n> > +SetWALProhibitState(uint32 new_state)\n> > +{\n> > + bool state_updated = false;\n> > + uint32 cur_state;\n> > +\n> > + cur_state = GetWALProhibitState();\n> > +\n> > + /* Server is already in requested state */\n> > + if (new_state == cur_state ||\n> > + new_state == (cur_state | WALPROHIBIT_TRANSITION_IN_PROGRESS))\n> > + return false;\n> > +\n> > + /* Prevent concurrent contrary in progress transition state setting */\n> > + if ((new_state & WALPROHIBIT_TRANSITION_IN_PROGRESS) &&\n> > + (cur_state & WALPROHIBIT_TRANSITION_IN_PROGRESS))\n> > + {\n> > + if (cur_state & WALPROHIBIT_STATE_READ_ONLY)\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> > + errmsg(\"system state transition to read only is already in progress\"),\n> > + errhint(\"Try after sometime again.\")));\n> > + else\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> > + errmsg(\"system state transition to read write is already in progress\"),\n> > + errhint(\"Try after sometime again.\")));\n> > + }\n> > +\n> > + /* Update new state in share memory */\n> > + state_updated =\n> > + pg_atomic_compare_exchange_u32(&WALProhibitState->SharedWALProhibitState,\n> > + &cur_state, new_state);\n> > +\n> > + if (!state_updated)\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> > + errmsg(\"system read write state concurrently changed\"),\n> > + errhint(\"Try after sometime again.\")));\n> > +\n>\n> I don't think it's safe to use pg_atomic_compare_exchange_u32() outside\n> of a loop. 
I think there's platforms (basically all load-linked /\n> store-conditional architectures) where that can fail spuriously.\n>\n> Also, there's no memory barrier around GetWALProhibitState, so there's\n> no guarantee it's not an out-of-date value you're starting with.\n>\n\nHow about having some kind of lock instead, as Robert has suggested\npreviously[3]?\n\n>\n> > +/*\n> > + * MarkCheckPointSkippedInWalProhibitState()\n> > + *\n> > + * Sets checkpoint pending flag so that it can be performed next time while\n> > + * changing system state to WAL permitted.\n> > + */\n> > +void\n> > +MarkCheckPointSkippedInWalProhibitState(void)\n> > +{\n> > + WALProhibitState->checkpointPending = true;\n> > +}\n>\n> I don't *at all* like this living outside of xlog.c. I think this should\n> be moved there, and merged with deferring checkpoints in other cases\n> (promotions, not immediately performing a checkpoint after recovery).\n\nHere we want to perform the checkpoint much later, when the\nsystem state changes back to read-write. For that, I think we need some flag;\nif we want this in xlog.c then we can have that flag in XLogCtl.\n\n\n> There's state in ControlFile *and* here for essentially the same thing.\n>\n\nI am sorry to trouble you, but I haven't understood this either.\n\n>\n>\n> > + * If it is not currently possible to insert write-ahead log records,\n> > + * either because we are still in recovery or because ALTER SYSTEM READ\n> > + * ONLY has been executed, force this to be a read-only transaction.\n> > + * We have lower level defences in XLogBeginInsert() and elsewhere to stop\n> > + * us from modifying data during recovery when !XLogInsertAllowed(), but\n> > + * this gives the normal indication to the user that the transaction is\n> > + * read-only.\n> > + *\n> > + * On the other hand, we only need to set the startedInRecovery flag when\n> > + * the transaction started during recovery, and not when WAL is otherwise\n> > + * prohibited. 
This information is used by RelationGetIndexScan() to\n> > + * decide whether to permit (1) relying on existing killed-tuple markings\n> > + * and (2) further killing of index tuples. Even when WAL is prohibited\n> > + * on the master, it's still the master, so the former is OK; and since\n> > + * killing index tuples doesn't generate WAL, the latter is also OK.\n> > + * See comments in RelationGetIndexScan() and MarkBufferDirtyHint().\n> > + */\n> > + XactReadOnly = DefaultXactReadOnly || !XLogInsertAllowed();\n> > + s->startedInRecovery = RecoveryInProgress();\n>\n> It's somewhat ugly that we call RecoveryInProgress() once in\n> XLogInsertAllowed() and then again directly here... It's probably fine\n> runtime cost wise, but...\n>\n>\n> > /*\n> > * Subroutine to try to fetch and validate a prior checkpoint record.\n> > *\n> > @@ -8508,9 +8564,13 @@ ShutdownXLOG(int code, Datum arg)\n> > */\n> > WalSndWaitStopping();\n> >\n> > + /*\n> > + * The restartpoint, checkpoint, or xlog rotation will be performed if the\n> > + * WAL writing is permitted.\n> > + */\n> > if (RecoveryInProgress())\n> > CreateRestartPoint(CHECKPOINT_IS_SHUTDOWN | CHECKPOINT_IMMEDIATE);\n> > - else\n> > + else if (XLogInsertAllowed())\n>\n> Not sure I like going via XLogInsertAllowed(), that seems like a\n> confusing indirection here. And it encompasses things we atually don't\n> want to check for - it's fragile to also look at LocalXLogInsertAllowed\n> here imo.\n>\n>\n> > ShutdownCLOG();\n> > ShutdownCommitTs();\n> > ShutdownSUBTRANS();\n> > diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c\n> > index 1b8cd7bacd4..aa4cdd57ec1 100644\n> > --- a/src/backend/postmaster/autovacuum.c\n> > +++ b/src/backend/postmaster/autovacuum.c\n> > @@ -652,6 +652,10 @@ AutoVacLauncherMain(int argc, char *argv[])\n> >\n> > HandleAutoVacLauncherInterrupts();\n> >\n> > + /* If the server is read only just go back to sleep. 
*/\n> > + if (!XLogInsertAllowed())\n> > + continue;\n> > +\n>\n> I think we really should have a different functions for places like\n> this. We don't want to generally hide bugs like e.g. starting the\n> autovac launcher in recovery, but this would.\n>\n\nSo, we need a separate function like XLogInsertAllowed() and a global variable\nlike LocalXLogInsertAllowed for the caching wal prohibit state.\n\n>\n> > @@ -342,6 +344,28 @@ CheckpointerMain(void)\n> > AbsorbSyncRequests();\n> > HandleCheckpointerInterrupts();\n> >\n> > + wal_state = GetWALProhibitState();\n> > +\n> > + if (wal_state & WALPROHIBIT_TRANSITION_IN_PROGRESS)\n> > + {\n> > + /* Complete WAL prohibit state change request */\n> > + CompleteWALProhibitChange(wal_state);\n> > + continue;\n> > + }\n> > + else if (wal_state & WALPROHIBIT_STATE_READ_ONLY)\n> > + {\n> > + /*\n> > + * Don't do anything until someone wakes us up. For example a\n> > + * backend might later on request us to put the system back to\n> > + * read-write wal prohibit sate.\n> > + */\n> > + (void) WaitLatch(MyLatch, WL_LATCH_SET | WL_EXIT_ON_PM_DEATH, -1,\n> > + WAIT_EVENT_CHECKPOINTER_MAIN);\n> > + continue;\n> > + }\n> > + Assert(wal_state == WALPROHIBIT_STATE_READ_WRITE);\n> > +\n> > /*\n> > * Detect a pending checkpoint request by checking whether the flags\n> > * word in shared memory is nonzero. We shouldn't need to acquire the\n> > @@ -1323,3 +1347,16 @@ FirstCallSinceLastCheckpoint(void)\n> >\n> > return FirstCall;\n> > }\n>\n> So, if we're in the middle of a paced checkpoint with a large\n> checkpoint_timeout - a sensible real world configuration - we'll not\n> process ASRO until that checkpoint is over? That seems very much not\n> practical. 
What am I missing?\n>\n\nYes, the process doing ASRO will wait until that checkpoint is over.\n\n>\n> > +/*\n> > + * send_signal_to_checkpointer allows a process to send a signal to the checkpoint process.\n> > + */\n> > +void\n> > +send_signal_to_checkpointer(int signum)\n> > +{\n> > + if (CheckpointerShmem->checkpointer_pid == 0)\n> > + elog(ERROR, \"checkpointer is not running\");\n> > +\n> > + if (kill(CheckpointerShmem->checkpointer_pid, signum) != 0)\n> > + elog(ERROR, \"could not signal checkpointer: %m\");\n> > +}\n>\n> Sudden switch to a different naming style...\n>\n\nMy bad, sorry, will fix that.\n\nRegards,\nAmul\n\n1] http://postgr.es/m/20200724020402.2byiiufsd7pw4hsp@alap3.anarazel.de\n2] http://postgr.es/m/CAAJ_b97KZzdJsffwRK7w0XU5HnXkcgKgTR69t8cOZztsyXjkQw@mail.gmail.com\n3] http://postgr.es/m/CA+TgmoYMyw-m3O5XQ8tRy4mdEArGcfXr+9niO5Fmq1wVdKxYmQ@mail.gmail.com\n\n\n",
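The compare-and-exchange point raised in this exchange — that a CAS used outside a loop can fail spuriously on load-linked/store-conditional architectures — corresponds to the standard retry-loop pattern. A minimal sketch in portable C11 atomics; the state constants mirror the patch's flag layout, but the function names are hypothetical, not the patch's API:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical state word: a R/O bit plus a "transition in progress" bit. */
#define STATE_READ_WRITE        0x0u
#define STATE_READ_ONLY         0x1u
#define TRANSITION_IN_PROGRESS  0x2u

static _Atomic uint32_t shared_state = STATE_READ_WRITE;

/*
 * Atomically begin a transition to 'new_state'.  Returns false if the
 * system is already in (or already moving to) that state, or if another
 * backend owns a transition.  The CAS runs in a loop because the weak
 * form may fail spuriously on LL/SC hardware, and because a concurrent
 * update invalidates our snapshot; on failure 'cur' is reloaded and the
 * preconditions are re-checked before retrying.
 */
static bool
begin_transition(uint32_t new_state)
{
    uint32_t cur = atomic_load(&shared_state);

    for (;;)
    {
        if (cur == new_state ||
            cur == (new_state | TRANSITION_IN_PROGRESS))
            return false;       /* nothing to do */

        if (cur & TRANSITION_IN_PROGRESS)
            return false;       /* someone else is mid-transition */

        if (atomic_compare_exchange_weak(&shared_state, &cur,
                                         new_state | TRANSITION_IN_PROGRESS))
            return true;        /* we own the transition */
        /* CAS failed: 'cur' now holds the fresh value; loop and re-check */
    }
}

/* Finish the transition by clearing the in-progress bit. */
static void
complete_transition(void)
{
    atomic_fetch_and(&shared_state, (uint32_t) ~TRANSITION_IN_PROGRESS);
}
```

C11 atomics default to sequentially consistent ordering, which also answers the "no memory barrier around the read" concern for this sketch; a relaxed load of the state word would need explicit fences.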
"msg_date": "Sat, 12 Sep 2020 10:52:38 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
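The wait protocol discussed for RequestWALProhibitChange() — sleep on a condition variable until the in-progress flag clears, re-check the predicate after every wakeup, and broadcast rather than signal on completion because several backends may be waiting — can be sketched with plain pthreads. This is an illustration of the pattern only, not the patch's ConditionVariable API:

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  state_cv   = PTHREAD_COND_INITIALIZER;
static bool transition_in_progress = false;

/* Requester side: block until the transition completes. */
static void
wait_for_transition(void)
{
    pthread_mutex_lock(&state_lock);
    /*
     * The predicate is re-checked after every wakeup: spurious wakeups,
     * and broadcasts for a different (e.g. opposite) transition finishing,
     * are both handled by looping on the condition.
     */
    while (transition_in_progress)
        pthread_cond_wait(&state_cv, &state_lock);
    pthread_mutex_unlock(&state_lock);
}

/* Completer side: finish the transition and wake every waiter. */
static void
complete_transition(void)
{
    pthread_mutex_lock(&state_lock);
    transition_in_progress = false;
    pthread_mutex_unlock(&state_lock);
    pthread_cond_broadcast(&state_cv);  /* not _signal: multiple waiters */
}
```

The broadcast matches the review comment that more than one backend can be sleeping on the same state change; a signal would wake only one of them.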
{
"msg_contents": "Hi Andres,\n\nThe attached patch has fixed the issue that you have raised & I have confirmed\nin my previous email. Also, I tried to improve some of the things that you have\npointed but for those changes, I am a little unsure and looking forward to the\ninputs/suggestions/confirmation on that, therefore 0003 patch is marked WIP.\n\nPlease have a look at my inline reply below for the things that are changes in\nthe attached version and need inputs:\n\nOn Sat, Sep 12, 2020 at 10:52 AM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Thu, Sep 10, 2020 at 2:33 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n>\n> Thanks for your time.\n>\n> >\n> > Thomas, there's one point below that could be relevant for you. You can\n> > search for your name and/or checkpoint...\n> >\n> >\n> > On 2020-09-01 16:43:10 +0530, Amul Sul wrote:\n> > > diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c\n> > > index 42050ab7195..0ac826d3c2f 100644\n> > > --- a/src/backend/nodes/readfuncs.c\n> > > +++ b/src/backend/nodes/readfuncs.c\n> > > @@ -2552,6 +2552,19 @@ _readAlternativeSubPlan(void)\n> > > READ_DONE();\n> > > }\n> > >\n> > > +/*\n> > > + * _readAlterSystemWALProhibitState\n> > > + */\n> > > +static AlterSystemWALProhibitState *\n> > > +_readAlterSystemWALProhibitState(void)\n> > > +{\n> > > + READ_LOCALS(AlterSystemWALProhibitState);\n> > > +\n> > > + READ_BOOL_FIELD(WALProhibited);\n> > > +\n> > > + READ_DONE();\n> > > +}\n> > > +\n> >\n> > Why do we need readfuncs support for this?\n> >\n>\n> I thought we need that from your previous comment[1].\n>\n> > > +\n> > > +/*\n> > > + * AlterSystemSetWALProhibitState\n> > > + *\n> > > + * Execute ALTER SYSTEM READ { ONLY | WRITE } statement.\n> > > + */\n> > > +static void\n> > > +AlterSystemSetWALProhibitState(AlterSystemWALProhibitState *stmt)\n> > > +{\n> > > + /* some code */\n> > > + elog(INFO, \"AlterSystemSetWALProhibitState() called\");\n> > > +}\n> >\n> > As long as it's not 
implemented it seems better to return an ERROR.\n> >\n>\n> Ok, will add an error in the next version.\n>\n> > > @@ -3195,6 +3195,16 @@ typedef struct AlterSystemStmt\n> > > VariableSetStmt *setstmt; /* SET subcommand */\n> > > } AlterSystemStmt;\n> > >\n> > > +/* ----------------------\n> > > + * Alter System Read Statement\n> > > + * ----------------------\n> > > + */\n> > > +typedef struct AlterSystemWALProhibitState\n> > > +{\n> > > + NodeTag type;\n> > > + bool WALProhibited;\n> > > +} AlterSystemWALProhibitState;\n> > > +\n> >\n> > All the nearby fields use under_score_style names.\n> >\n>\n> I am not sure which nearby fields having the underscore that you are referring\n> to. Probably \"WALProhibited\" needs to be renamed to \"walprohibited\" to be\n> inline with the nearby fields.\n>\n> >\n> > > From f59329e4a7285c5b132ca74473fe88e5ba537254 Mon Sep 17 00:00:00 2001\n> > > From: Amul Sul <amul.sul@enterprisedb.com>\n> > > Date: Fri, 19 Jun 2020 06:29:36 -0400\n> > > Subject: [PATCH v6 3/5] Implement ALTER SYSTEM READ ONLY using global barrier.\n> > >\n> > > Implementation:\n> > >\n> > > 1. When a user tried to change server state to WAL-Prohibited using\n> > > ALTER SYSTEM READ ONLY command; AlterSystemSetWALProhibitState()\n> > > raises request to checkpointer by marking current state to inprogress in\n> > > shared memory. Checkpointer, noticing that the current state is has\n> >\n> > \"is has\"\n> >\n> > > WALPROHIBIT_TRANSITION_IN_PROGRESS flag set, does the barrier request, and\n> > > then acknowledges back to the backend who requested the state change once\n> > > the transition has been completed. 
Final state will be updated in control\n> > > file to make it persistent across the system restarts.\n> >\n> > What makes checkpointer the right backend to do this work?\n> >\n>\n> Once we've initiated the change to a read-only state, we probably want to\n> always either finish that change or go back to read-write, even if the process\n> that initiated the change is interrupted. Leaving the system in a\n> half-way-in-between state long term seems bad. We could have added a new\n> background process for this, but we chose to put the checkpointer in charge of\n> the state change to keep the first version of the patch simple. The\n> checkpointer isn't likely to get killed, but if it does, it will be relaunched\n> and the new one can clean things up. Later we might want a dedicated\n> background worker that is even less likely to get killed.\n>\n> >\n> > > 2. When a backend receives the WAL-Prohibited barrier, at that moment if\n> > > it is already in a transaction and the transaction already assigned XID,\n> > > then the backend will be killed by throwing FATAL(XXX: need more discussion\n> > > on this)\n> >\n> >\n> > > 3. Otherwise, if that backend running transaction which yet to get XID\n> > > assigned we don't need to do anything special\n> >\n> > Somewhat garbled sentence...\n> >\n> >\n> > > 4. A new transaction (from existing or new backend) starts as a read-only\n> > > transaction.\n> >\n> > Maybe \"(in an existing or in a new backend)\"?\n> >\n> >\n> > > 5. Autovacuum launcher as well as checkpointer will don't do anything in\n> > > WAL-Prohibited server state until someone wakes us up. E.g. a backend\n> > > might later on request us to put the system back to read-write.\n> >\n> > \"will don't do anything\", \"might later on request us\"\n> >\n>\n> Ok, I'll fix all of this. I usually don't much focus on the commit message text\n> but I try to make it as much as possible sane enough.\n>\n> >\n> > > 6. 
At shutdown in WAL-Prohibited mode, we'll skip shutdown checkpoint\n> > > and xlog rotation. Starting up again will perform crash recovery(XXX:\n> > > need some discussion on this as well) but the end of recovery checkpoint\n> > > will be skipped and it will be performed when the system changed to\n> > > WAL-Permitted mode.\n> >\n> > Hm, this has some interesting interactions with some of Thomas' recent\n> > hacking.\n> >\n>\n> I would be so thankful for the help.\n>\n> >\n> > > 8. Only super user can toggle WAL-Prohibit state.\n> >\n> > Hm. I don't quite agree with this. We try to avoid if (superuser())\n> > style checks these days, because they can't be granted to other\n> > users. Look at how e.g. pg_promote() - an operation of similar severity\n> > - is handled. We just revoke the permission from public in\n> > system_views.sql:\n> > REVOKE EXECUTE ON FUNCTION pg_promote(boolean, integer) FROM public;\n> >\n>\n> Ok, currently we don't have SQL callable function to change the system\n> read-write state. Do you want me to add that? If so, any naming suggesting? How\n> about pg_make_system_read_only(bool) or have two function as\n> pg_make_system_read_only(void) & pg_make_system_read_write(void).\n>\n\nIn the attached version I added SQL callable function as\npg_alter_wal_prohibit_state(bool), and another suggestion for the naming is\nwelcome.\n\nFor the permission denied error for ASRO READ-ONLY/READ-WRITE, I have added\nereport() in AlterSystemSetWALProhibitState() instead of aclcheck_error() and\nthe hint is added. Any suggestions?\n\n> >\n> > > 9. Add system_is_read_only GUC show the system state -- will true when system\n> > > is wal prohibited or in recovery.\n> >\n> > *shows the system state. There's also some oddity in the second part of\n> > the sentence.\n> >\n> > Is it really correct to show system_is_read_only as true during\n> > recovery? 
For one, recovery could end soon after, putting the system\n> > into r/w mode, if it wasn't actually ALTER SYSTEM READ ONLY'd. But also,\n> > during recovery the database state actually changes if there are changes\n> > to replay. ISTM it would not be a good idea to mix ASRO and\n> > pg_is_in_recovery() into one GUC.\n> >\n>\n> Well, whether the system is in recovery or wal prohibited state it is read-only\n> for the user perspective, isn't it?\n>\n> >\n> > > --- /dev/null\n> > > +++ b/src/backend/access/transam/walprohibit.c\n> > > @@ -0,0 +1,321 @@\n> > > +/*-------------------------------------------------------------------------\n> > > + *\n> > > + * walprohibit.c\n> > > + * PostgreSQL write-ahead log prohibit states\n> > > + *\n> > > + *\n> > > + * Portions Copyright (c) 2020, PostgreSQL Global Development Group\n> > > + *\n> > > + * src/backend/access/transam/walprohibit.c\n> > > + *\n> > > + *-------------------------------------------------------------------------\n> > > + */\n> > > +#include \"postgres.h\"\n> > > +\n> > > +#include \"access/walprohibit.h\"\n> > > +#include \"pgstat.h\"\n> > > +#include \"port/atomics.h\"\n> > > +#include \"postmaster/bgwriter.h\"\n> > > +#include \"storage/condition_variable.h\"\n> > > +#include \"storage/procsignal.h\"\n> > > +#include \"storage/shmem.h\"\n> > > +\n> > > +/*\n> > > + * Shared-memory WAL prohibit state\n> > > + */\n> > > +typedef struct WALProhibitStateData\n> > > +{\n> > > + /* Indicates current WAL prohibit state */\n> > > + pg_atomic_uint32 SharedWALProhibitState;\n> > > +\n> > > + /* Startup checkpoint pending */\n> > > + bool checkpointPending;\n> > > +\n> > > + /* Signaled when requested WAL prohibit state changes */\n> > > + ConditionVariable walprohibit_cv;\n> >\n> > You're using three different naming styles for as many members.\n> >\n>\n> Ill fix in the next version.\n>\n> >\n> > > +/*\n> > > + * ProcessBarrierWALProhibit()\n> > > + *\n> > > + * Handle WAL prohibit state change request.\n> > > + 
*/\n> > > +bool\n> > > +ProcessBarrierWALProhibit(void)\n> > > +{\n> > > + /*\n> > > + * Kill off any transactions that have an XID *before* allowing the system\n> > > + * to go WAL prohibit state.\n> > > + */\n> > > + if (FullTransactionIdIsValid(GetTopFullTransactionIdIfAny()))\n> >\n> > Hm. I wonder if this check is good enough. If you look at\n> > RecordTransactionCommit() we also WAL log in some cases where no xid was\n> > assigned. This is particularly true of (auto-)vacuum, but also for HOT\n> > pruning.\n> >\n> > I think it'd be good to put the logic of this check into xlog.c and\n> > mirror the logic in RecordTransactionCommit(). And add cross-referencing\n> > comments to RecordTransactionCommit and the new function, reminding our\n> > futures selves that both places need to be modified.\n> >\n>\n> I am not sure I have understood this, here is the snip from the implementation\n> detail from the first post[2]:\n>\n> \"Open transactions that don't have an XID are not killed, but will get an ERROR\n> if they try to acquire an XID later, or if they try to write WAL without\n> acquiring an XID (e.g. VACUUM). To make that happen, the patch adds a new\n> coding rule: a critical section that will write WAL must be preceded by a call\n> to CheckWALPermitted(), AssertWALPermitted(), or AssertWALPermitted_HaveXID().\n> The latter variants are used when we know for certain that inserting WAL here\n> must be OK, either because we have an XID (we would have been killed by a change\n> to read-only if one had occurred) or for some other reason.\"\n>\n> Do let me know if you want further clarification.\n>\n> >\n> > > + {\n> > > + /* Should be here only for the WAL prohibit state. 
*/\n> > > + Assert(GetWALProhibitState() & WALPROHIBIT_STATE_READ_ONLY);\n> >\n> > There are no races where an ASRO READ ONLY is quickly followed by ASRO\n> > READ WRITE where this could be reached?\n> >\n>\n> No, right now SetWALProhibitState() doesn't allow two transient wal prohibit\n> states at a time.\n>\n> >\n> > > +/*\n> > > + * AlterSystemSetWALProhibitState()\n> > > + *\n> > > + * Execute ALTER SYSTEM READ { ONLY | WRITE } statement.\n> > > + */\n> > > +void\n> > > +AlterSystemSetWALProhibitState(AlterSystemWALProhibitState *stmt)\n> > > +{\n> > > + uint32 state;\n> > > +\n> > > + if (!superuser())\n> > > + ereport(ERROR,\n> > > + (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> > > + errmsg(\"must be superuser to execute ALTER SYSTEM command\")));\n> >\n> > See comments about this above.\n> >\n> >\n> > > + /* Alter WAL prohibit state not allowed during recovery */\n> > > + PreventCommandDuringRecovery(\"ALTER SYSTEM\");\n> > > +\n> > > + /* Requested state */\n> > > + state = stmt->WALProhibited ?\n> > > + WALPROHIBIT_STATE_READ_ONLY : WALPROHIBIT_STATE_READ_WRITE;\n> > > +\n> > > + /*\n> > > + * Since we yet to convey this WAL prohibit state to all backend mark it\n> > > + * in-progress.\n> > > + */\n> > > + state |= WALPROHIBIT_TRANSITION_IN_PROGRESS;\n> > > +\n> > > + if (!SetWALProhibitState(state))\n> > > + return; /* server is already in the desired state */\n> > > +\n> >\n> > This use of bitmasks seems unnecessary to me. 
I'd rather have one param\n> > for WALPROHIBIT_STATE_READ_ONLY / WALPROHIBIT_STATE_READ_WRITE and one\n> > for WALPROHIBIT_TRANSITION_IN_PROGRESS\n> >\n>\n> Ok.\n>\n> How about the new version of SetWALProhibitState function as :\n> SetWALProhibitState(bool wal_prohibited, bool is_final_state) ?\n>\n\nI have added the same.\n\n> >\n> >\n> > > +/*\n> > > + * RequestWALProhibitChange()\n> > > + *\n> > > + * Request checkpointer to make the WALProhibitState to read-only.\n> > > + */\n> > > +static void\n> > > +RequestWALProhibitChange(void)\n> > > +{\n> > > + /* Must not be called from checkpointer */\n> > > + Assert(!AmCheckpointerProcess());\n> > > + Assert(GetWALProhibitState() & WALPROHIBIT_TRANSITION_IN_PROGRESS);\n> > > +\n> > > + /*\n> > > + * If in a standalone backend, just do it ourselves.\n> > > + */\n> > > + if (!IsPostmasterEnvironment)\n> > > + {\n> > > + CompleteWALProhibitChange(GetWALProhibitState());\n> > > + return;\n> > > + }\n> > > +\n> > > + send_signal_to_checkpointer(SIGINT);\n> > > +\n> > > + /* Wait for the state to change to read-only */\n> > > + ConditionVariablePrepareToSleep(&WALProhibitState->walprohibit_cv);\n> > > + for (;;)\n> > > + {\n> > > + /* We'll be done once in-progress flag bit is cleared */\n> > > + if (!(GetWALProhibitState() & WALPROHIBIT_TRANSITION_IN_PROGRESS))\n> > > + break;\n> > > +\n> > > + ConditionVariableSleep(&WALProhibitState->walprohibit_cv,\n> > > + WAIT_EVENT_WALPROHIBIT_STATE_CHANGE);\n> > > + }\n> > > + ConditionVariableCancelSleep();\n> >\n> > What if somebody concurrently changes the state back to READ WRITE?\n> > Won't we unnecessarily wait here?\n> >\n>\n> Yes, there will be wait.\n>\n> > That's probably fine, because we would just wait until that transition\n> > is complete too. But at least a comment about that would be\n> > good. 
Alternatively a \"ASRO transitions completed counter\" or such might\n> > be a better idea?\n> >\n>\n> Ok, will add comments but could you please elaborate little a bit about \"ASRO\n> transitions completed counter\" and is there any existing counter I can refer\n> to?\n>\n> >\n> > > +/*\n> > > + * CompleteWALProhibitChange()\n> > > + *\n> > > + * Checkpointer will call this to complete the requested WAL prohibit state\n> > > + * transition.\n> > > + */\n> > > +void\n> > > +CompleteWALProhibitChange(uint32 wal_state)\n> > > +{\n> > > + uint64 barrierGeneration;\n> > > +\n> > > + /*\n> > > + * Must be called from checkpointer. Otherwise, it must be single-user\n> > > + * backend.\n> > > + */\n> > > + Assert(AmCheckpointerProcess() || !IsPostmasterEnvironment);\n> > > + Assert(wal_state & WALPROHIBIT_TRANSITION_IN_PROGRESS);\n> > > +\n> > > + /*\n> > > + * WAL prohibit state change is initiated. We need to complete the state\n> > > + * transition by setting requested WAL prohibit state in all backends.\n> > > + */\n> > > + elog(DEBUG1, \"waiting for backends to adopt requested WAL prohibit state\");\n> > > +\n> > > + /* Emit global barrier */\n> > > + barrierGeneration = EmitProcSignalBarrier(PROCSIGNAL_BARRIER_WALPROHIBIT);\n> > > + WaitForProcSignalBarrier(barrierGeneration);\n> > > +\n> > > + /* And flush all writes. */\n> > > + XLogFlush(GetXLogWriteRecPtr());\n> >\n> > Hm, maybe I'm missing something, but why is the write pointer the right\n> > thing to flush? That won't include records that haven't been written to\n> > disk yet... We also need to trigger writing out all WAL that is as of\n> > yet unwritten, no? Without having thought a lot about it, it seems that\n> > GetXLogInsertRecPtr() would be the right thing to flush?\n> >\n>\n> TBH, I am not an expert in this area. I wants to flush the latest record\n> pointer that needs to be flushed, I think GetXLogInsertRecPtr() would be fine\n> if is the latest one. 
Note that wal flushes are not blocked in read-only mode.\n>\n\nUsed GetXLogInsertRecPtr().\n\n> >\n> > > + /* Set final state by clearing in-progress flag bit */\n> > > + if (SetWALProhibitState(wal_state & ~(WALPROHIBIT_TRANSITION_IN_PROGRESS)))\n> > > + {\n> > > + bool wal_prohibited;\n> > > +\n> > > + wal_prohibited = (wal_state & WALPROHIBIT_STATE_READ_ONLY) != 0;\n> > > +\n> > > + /* Update the control file to make state persistent */\n> > > + SetControlFileWALProhibitFlag(wal_prohibited);\n> >\n> > Hm. Is there an issue with not WAL logging the control file change? Is\n> > there a scenario where we a crash + recovery would end up overwriting\n> > this?\n> >\n>\n> I am not sure. If the system crash before update this that means we haven't\n> acknowledged the system state change. And the server will be restarted with the\n> previous state.\n>\n> Could you please explain what bothering you.\n>\n> >\n> > > + if (wal_prohibited)\n> > > + ereport(LOG, (errmsg(\"system is now read only\")));\n> > > + else\n> > > + {\n> > > + /*\n> > > + * Request checkpoint if the end-of-recovery checkpoint has been\n> > > + * skipped previously.\n> > > + */\n> > > + if (WALProhibitState->checkpointPending)\n> > > + {\n> > > + RequestCheckpoint(CHECKPOINT_END_OF_RECOVERY |\n> > > + CHECKPOINT_IMMEDIATE);\n> > > + WALProhibitState->checkpointPending = false;\n> > > + }\n> > > + ereport(LOG, (errmsg(\"system is now read write\")));\n> > > + }\n> > > + }\n> > > +\n> > > + /* Wake up the backend who requested the state change */\n> > > + ConditionVariableBroadcast(&WALProhibitState->walprohibit_cv);\n> >\n> > Could be multiple backends, right?\n> >\n>\n> Yes, you are correct, will fix that.\n>\n> >\n> > > +}\n> > > +\n> > > +/*\n> > > + * GetWALProhibitState()\n> > > + *\n> > > + * Atomically return the current server WAL prohibited state\n> > > + */\n> > > +uint32\n> > > +GetWALProhibitState(void)\n> > > +{\n> > > + return 
pg_atomic_read_u32(&WALProhibitState->SharedWALProhibitState);\n> > > +}\n> >\n> > Is there an issue with needing memory barriers here?\n> >\n> >\n> > > +/*\n> > > + * SetWALProhibitState()\n> > > + *\n> > > + * Change current WAL prohibit state to the input state.\n> > > + *\n> > > + * If the server is already completely moved to the requested WAL prohibit\n> > > + * state, or if the desired state is same as the current state, return false,\n> > > + * indicating that the server state did not change. Else return true.\n> > > + */\n> > > +bool\n> > > +SetWALProhibitState(uint32 new_state)\n> > > +{\n> > > + bool state_updated = false;\n> > > + uint32 cur_state;\n> > > +\n> > > + cur_state = GetWALProhibitState();\n> > > +\n> > > + /* Server is already in requested state */\n> > > + if (new_state == cur_state ||\n> > > + new_state == (cur_state | WALPROHIBIT_TRANSITION_IN_PROGRESS))\n> > > + return false;\n> > > +\n> > > + /* Prevent concurrent contrary in progress transition state setting */\n> > > + if ((new_state & WALPROHIBIT_TRANSITION_IN_PROGRESS) &&\n> > > + (cur_state & WALPROHIBIT_TRANSITION_IN_PROGRESS))\n> > > + {\n> > > + if (cur_state & WALPROHIBIT_STATE_READ_ONLY)\n> > > + ereport(ERROR,\n> > > + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> > > + errmsg(\"system state transition to read only is already in progress\"),\n> > > + errhint(\"Try after sometime again.\")));\n> > > + else\n> > > + ereport(ERROR,\n> > > + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> > > + errmsg(\"system state transition to read write is already in progress\"),\n> > > + errhint(\"Try after sometime again.\")));\n> > > + }\n> > > +\n> > > + /* Update new state in share memory */\n> > > + state_updated =\n> > > + pg_atomic_compare_exchange_u32(&WALProhibitState->SharedWALProhibitState,\n> > > + &cur_state, new_state);\n> > > +\n> > > + if (!state_updated)\n> > > + ereport(ERROR,\n> > > + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> > > + 
errmsg(\"system read write state concurrently changed\"),\n> > > + errhint(\"Try after sometime again.\")));\n> > > +\n> >\n> > I don't think it's safe to use pg_atomic_compare_exchange_u32() outside\n> > of a loop. I think there's platforms (basically all load-linked /\n> > store-conditional architectures) where than can fail spuriously.\n> >\n> > Also, there's no memory barrier around GetWALProhibitState, so there's\n> > no guarantee it's not an out-of-date value you're starting with.\n> >\n>\n> How about having some kind of lock instead what Robert have suggested\n> previously[3] ?\n>\n\nI would like to discuss this point more. In the attached version I have added\nWALProhibitLock to protect shared walprohibit state updates. I was a little\nunsure do we want another spinlock what XLogCtlData has which is mostly used to\nread the shared variable and for the update, both are used e.g. LogwrtResult.\n\nRight now I haven't added and shared walprohibit state was fetch using a\nvolatile pointer. Do we need a spinlock there, I am not sure why? Thoughts?\n\n> >\n> > > +/\n> > > + * MarkCheckPointSkippedInWalProhibitState()\n> > > + *\n> > > + * Sets checkpoint pending flag so that it can be performed next time while\n> > > + * changing system state to WAL permitted.\n> > > + */\n> > > +void\n> > > +MarkCheckPointSkippedInWalProhibitState(void)\n> > > +{\n> > > + WALProhibitState->checkpointPending = true;\n> > > +}\n> >\n> > I don't *at all* like this living outside of xlog.c. I think this should\n> > be moved there, and merged with deferring checkpoints in other cases\n> > (promotions, not immediately performing a checkpoint after recovery).\n>\n> Here we want to perform the checkpoint sometime quite later when the\n> system state changes to read-write. 
For that, I think we need some flag\n> if we want this in xlog.c then we can have that flag in XLogCtl.\n>\n\nRight now I have added a new variable to XLogCtlData and moved this code to\nxlog.c.\n\n>\n> > There's state in ControlFile *and* here for essentially the same thing.\n> >\n>\n> I am sorry to trouble you much, but I haven't understood this too.\n>\n> >\n> >\n> > > + * If it is not currently possible to insert write-ahead log records,\n> > > + * either because we are still in recovery or because ALTER SYSTEM READ\n> > > + * ONLY has been executed, force this to be a read-only transaction.\n> > > + * We have lower level defences in XLogBeginInsert() and elsewhere to stop\n> > > + * us from modifying data during recovery when !XLogInsertAllowed(), but\n> > > + * this gives the normal indication to the user that the transaction is\n> > > + * read-only.\n> > > + *\n> > > + * On the other hand, we only need to set the startedInRecovery flag when\n> > > + * the transaction started during recovery, and not when WAL is otherwise\n> > > + * prohibited. This information is used by RelationGetIndexScan() to\n> > > + * decide whether to permit (1) relying on existing killed-tuple markings\n> > > + * and (2) further killing of index tuples. Even when WAL is prohibited\n> > > + * on the master, it's still the master, so the former is OK; and since\n> > > + * killing index tuples doesn't generate WAL, the latter is also OK.\n> > > + * See comments in RelationGetIndexScan() and MarkBufferDirtyHint().\n> > > + */\n> > > + XactReadOnly = DefaultXactReadOnly || !XLogInsertAllowed();\n> > > + s->startedInRecovery = RecoveryInProgress();\n> >\n> > It's somewhat ugly that we call RecoveryInProgress() once in\n> > XLogInsertAllowed() and then again directly here... 
It's probably fine\n> > runtime cost wise, but...\n> >\n> >\n> > > /*\n> > > * Subroutine to try to fetch and validate a prior checkpoint record.\n> > > *\n> > > @@ -8508,9 +8564,13 @@ ShutdownXLOG(int code, Datum arg)\n> > > */\n> > > WalSndWaitStopping();\n> > >\n> > > + /*\n> > > + * The restartpoint, checkpoint, or xlog rotation will be performed if the\n> > > + * WAL writing is permitted.\n> > > + */\n> > > if (RecoveryInProgress())\n> > > CreateRestartPoint(CHECKPOINT_IS_SHUTDOWN | CHECKPOINT_IMMEDIATE);\n> > > - else\n> > > + else if (XLogInsertAllowed())\n> >\n> > Not sure I like going via XLogInsertAllowed(), that seems like a\n> > confusing indirection here. And it encompasses things we atually don't\n> > want to check for - it's fragile to also look at LocalXLogInsertAllowed\n> > here imo.\n> >\n> >\n> > > ShutdownCLOG();\n> > > ShutdownCommitTs();\n> > > ShutdownSUBTRANS();\n> > > diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c\n> > > index 1b8cd7bacd4..aa4cdd57ec1 100644\n> > > --- a/src/backend/postmaster/autovacuum.c\n> > > +++ b/src/backend/postmaster/autovacuum.c\n> > > @@ -652,6 +652,10 @@ AutoVacLauncherMain(int argc, char *argv[])\n> > >\n> > > HandleAutoVacLauncherInterrupts();\n> > >\n> > > + /* If the server is read only just go back to sleep. */\n> > > + if (!XLogInsertAllowed())\n> > > + continue;\n> > > +\n> >\n> > I think we really should have a different functions for places like\n> > this. We don't want to generally hide bugs like e.g. 
starting the\n> > autovac launcher in recovery, but this would.\n> >\n>\n> So, we need a separate function like XLogInsertAllowed() and a global variable\n> like LocalXLogInsertAllowed for the caching wal prohibit state.\n>\n> >\n> > > @@ -342,6 +344,28 @@ CheckpointerMain(void)\n> > > AbsorbSyncRequests();\n> > > HandleCheckpointerInterrupts();\n> > >\n> > > + wal_state = GetWALProhibitState();\n> > > +\n> > > + if (wal_state & WALPROHIBIT_TRANSITION_IN_PROGRESS)\n> > > + {\n> > > + /* Complete WAL prohibit state change request */\n> > > + CompleteWALProhibitChange(wal_state);\n> > > + continue;\n> > > + }\n> > > + else if (wal_state & WALPROHIBIT_STATE_READ_ONLY)\n> > > + {\n> > > + /*\n> > > + * Don't do anything until someone wakes us up. For example a\n> > > + * backend might later on request us to put the system back to\n> > > + * read-write wal prohibit sate.\n> > > + */\n> > > + (void) WaitLatch(MyLatch, WL_LATCH_SET | WL_EXIT_ON_PM_DEATH, -1,\n> > > + WAIT_EVENT_CHECKPOINTER_MAIN);\n> > > + continue;\n> > > + }\n> > > + Assert(wal_state == WALPROHIBIT_STATE_READ_WRITE);\n> > > +\n> > > /*\n> > > * Detect a pending checkpoint request by checking whether the flags\n> > > * word in shared memory is nonzero. We shouldn't need to acquire the\n> > > @@ -1323,3 +1347,16 @@ FirstCallSinceLastCheckpoint(void)\n> > >\n> > > return FirstCall;\n> > > }\n> >\n> > So, if we're in the middle of a paced checkpoint with a large\n> > checkpoint_timeout - a sensible real world configuration - we'll not\n> > process ASRO until that checkpoint is over? That seems very much not\n> > practical. 
What am I missing?\n> >\n>\n> Yes, the process doing ASRO will wait until that checkpoint is over.\n>\n> >\n> > > +/*\n> > > + * send_signal_to_checkpointer allows a process to send a signal to the checkpoint process.\n> > > + */\n> > > +void\n> > > +send_signal_to_checkpointer(int signum)\n> > > +{\n> > > + if (CheckpointerShmem->checkpointer_pid == 0)\n> > > + elog(ERROR, \"checkpointer is not running\");\n> > > +\n> > > + if (kill(CheckpointerShmem->checkpointer_pid, signum) != 0)\n> > > + elog(ERROR, \"could not signal checkpointer: %m\");\n> > > +}\n> >\n> > Sudden switch to a different naming style...\n> >\n>\n> My bad, sorry, will fix that.\n>\n> 1] http://postgr.es/m/20200724020402.2byiiufsd7pw4hsp@alap3.anarazel.de\n> 2] http://postgr.es/m/CAAJ_b97KZzdJsffwRK7w0XU5HnXkcgKgTR69t8cOZztsyXjkQw@mail.gmail.com\n> 3] http://postgr.es/m/CA+TgmoYMyw-m3O5XQ8tRy4mdEArGcfXr+9niO5Fmq1wVdKxYmQ@mail.gmail.com\n\n\nThank you !\n\nRegards,\nAmul",
"msg_date": "Tue, 15 Sep 2020 14:35:39 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Tue, Sep 8, 2020 at 2:20 PM Andres Freund <andres@anarazel.de> wrote:\n> This pattern seems like it'll get unwieldy with more than one barrier\n> type. And won't flag \"unhandled\" barrier types either (already the case,\n> I know). We could go for something like:\n>\n> while (flags != 0)\n> {\n> barrier_bit = pg_rightmost_one_pos32(flags);\n> barrier_type = 1 >> barrier_bit;\n>\n> switch (barrier_type)\n> {\n> case PROCSIGNAL_BARRIER_PLACEHOLDER:\n> processed = ProcessBarrierPlaceholder();\n> }\n>\n> if (processed)\n> BARRIER_CLEAR_BIT(flags, barrier_type);\n> }\n>\n> But perhaps that's too complicated?\n\nI don't mind a loop, but that one looks broken. We have to clear the\nbit before we call the function that processes that type of barrier.\nOtherwise, if we succeed in absorbing the barrier but a new instance\nof the same barrier arrives meanwhile, we'll fail to realize that we\nneed to absorb the new one.\n\n> For this to be correct, wouldn't flags need to be volatile? Otherwise\n> this might use a register value for flags, which might not contain the\n> correct value at this point.\n\nI think you're right.\n\n> Perhaps a comment explaining why we have to clear bits first would be\n> good?\n\nProbably a good idea.\n\n[ snipping assorted comments with which I agree ]\n\n> It might be good to add a warning to WaitForProcSignalBarrier() or by\n> pss_barrierCheckMask indicating that it's *not* OK to look at\n> pss_barrierCheckMask when checking whether barriers have been processed.\n\nNot sure I understand this one.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 16 Sep 2020 15:33:53 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Tue, Sep 15, 2020 at 2:35 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> Hi Andres,\n>\n> The attached patch has fixed the issue that you have raised & I have confirmed\n> in my previous email. Also, I tried to improve some of the things that you have\n> pointed but for those changes, I am a little unsure and looking forward to the\n> inputs/suggestions/confirmation on that, therefore 0003 patch is marked WIP.\n>\n> Please have a look at my inline reply below for the things that are changes in\n> the attached version and need inputs:\n>\n> On Sat, Sep 12, 2020 at 10:52 AM Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > On Thu, Sep 10, 2020 at 2:33 AM Andres Freund <andres@anarazel.de> wrote:\n> > >\n[... Skipped ....]\n> > >\n> > >\n> > > > +/*\n> > > > + * RequestWALProhibitChange()\n> > > > + *\n> > > > + * Request checkpointer to make the WALProhibitState to read-only.\n> > > > + */\n> > > > +static void\n> > > > +RequestWALProhibitChange(void)\n> > > > +{\n> > > > + /* Must not be called from checkpointer */\n> > > > + Assert(!AmCheckpointerProcess());\n> > > > + Assert(GetWALProhibitState() & WALPROHIBIT_TRANSITION_IN_PROGRESS);\n> > > > +\n> > > > + /*\n> > > > + * If in a standalone backend, just do it ourselves.\n> > > > + */\n> > > > + if (!IsPostmasterEnvironment)\n> > > > + {\n> > > > + CompleteWALProhibitChange(GetWALProhibitState());\n> > > > + return;\n> > > > + }\n> > > > +\n> > > > + send_signal_to_checkpointer(SIGINT);\n> > > > +\n> > > > + /* Wait for the state to change to read-only */\n> > > > + ConditionVariablePrepareToSleep(&WALProhibitState->walprohibit_cv);\n> > > > + for (;;)\n> > > > + {\n> > > > + /* We'll be done once in-progress flag bit is cleared */\n> > > > + if (!(GetWALProhibitState() & WALPROHIBIT_TRANSITION_IN_PROGRESS))\n> > > > + break;\n> > > > +\n> > > > + ConditionVariableSleep(&WALProhibitState->walprohibit_cv,\n> > > > + WAIT_EVENT_WALPROHIBIT_STATE_CHANGE);\n> > > > + }\n> > > > + 
ConditionVariableCancelSleep();\n> > >\n> > > What if somebody concurrently changes the state back to READ WRITE?\n> > > Won't we unnecessarily wait here?\n> > >\n> >\n> > Yes, there will be wait.\n> >\n> > > That's probably fine, because we would just wait until that transition\n> > > is complete too. But at least a comment about that would be\n> > > good. Alternatively a \"ASRO transitions completed counter\" or such might\n> > > be a better idea?\n> > >\n> >\n> > Ok, will add comments but could you please elaborate little a bit about \"ASRO\n> > transitions completed counter\" and is there any existing counter I can refer\n> > to?\n> >\n\nIn an off-list discussion, Robert had explained to me this counter thing and\nits requirement.\n\nI tried to add the same as \"shared WAL prohibited state generation\" in the\nattached version. The implementation is quite similar to the generation counter\nin the super barrier. In the attached version, when a backend makes a request\nfor the WAL prohibit state changes then a generation number will be given to\nthat backend to wait on and that wait will be ended when the shared generation\ncounter changes.\n\n> > >\n[... Skipped ....]\n> > > > +/*\n> > > > + * SetWALProhibitState()\n> > > > + *\n> > > > + * Change current WAL prohibit state to the input state.\n> > > > + *\n> > > > + * If the server is already completely moved to the requested WAL prohibit\n> > > > + * state, or if the desired state is same as the current state, return false,\n> > > > + * indicating that the server state did not change. 
Else return true.\n> > > > + */\n> > > > +bool\n> > > > +SetWALProhibitState(uint32 new_state)\n> > > > +{\n> > > > + bool state_updated = false;\n> > > > + uint32 cur_state;\n> > > > +\n> > > > + cur_state = GetWALProhibitState();\n> > > > +\n> > > > + /* Server is already in requested state */\n> > > > + if (new_state == cur_state ||\n> > > > + new_state == (cur_state | WALPROHIBIT_TRANSITION_IN_PROGRESS))\n> > > > + return false;\n> > > > +\n> > > > + /* Prevent concurrent contrary in progress transition state setting */\n> > > > + if ((new_state & WALPROHIBIT_TRANSITION_IN_PROGRESS) &&\n> > > > + (cur_state & WALPROHIBIT_TRANSITION_IN_PROGRESS))\n> > > > + {\n> > > > + if (cur_state & WALPROHIBIT_STATE_READ_ONLY)\n> > > > + ereport(ERROR,\n> > > > + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> > > > + errmsg(\"system state transition to read only is already in progress\"),\n> > > > + errhint(\"Try after sometime again.\")));\n> > > > + else\n> > > > + ereport(ERROR,\n> > > > + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> > > > + errmsg(\"system state transition to read write is already in progress\"),\n> > > > + errhint(\"Try after sometime again.\")));\n> > > > + }\n> > > > +\n> > > > + /* Update new state in share memory */\n> > > > + state_updated =\n> > > > + pg_atomic_compare_exchange_u32(&WALProhibitState->SharedWALProhibitState,\n> > > > + &cur_state, new_state);\n> > > > +\n> > > > + if (!state_updated)\n> > > > + ereport(ERROR,\n> > > > + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> > > > + errmsg(\"system read write state concurrently changed\"),\n> > > > + errhint(\"Try after sometime again.\")));\n> > > > +\n> > >\n> > > I don't think it's safe to use pg_atomic_compare_exchange_u32() outside\n> > > of a loop. 
I think there's platforms (basically all load-linked /\n> > > store-conditional architectures) where than can fail spuriously.\n> > >\n> > > Also, there's no memory barrier around GetWALProhibitState, so there's\n> > > no guarantee it's not an out-of-date value you're starting with.\n> > >\n> >\n> > How about having some kind of lock instead what Robert have suggested\n> > previously[3] ?\n> >\n>\n> I would like to discuss this point more. In the attached version I have added\n> WALProhibitLock to protect shared walprohibit state updates. I was a little\n> unsure do we want another spinlock what XLogCtlData has which is mostly used to\n> read the shared variable and for the update, both are used e.g. LogwrtResult.\n>\n> Right now I haven't added and shared walprohibit state was fetch using a\n> volatile pointer. Do we need a spinlock there, I am not sure why? Thoughts?\n>\n\nI reverted this WALProhibitLock implementation since with changes in the\nattached version I don't think we need that locking.\n\nRegards,\nAmul",
"msg_date": "Wed, 23 Sep 2020 11:34:41 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "Attached is a rebased version for the latest master head (#e21cbb4b893).\n\nRegards,\nAmul",
"msg_date": "Mon, 28 Sep 2020 17:59:41 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, Sep 16, 2020 at 3:33 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I don't mind a loop, but that one looks broken. We have to clear the\n> bit before we call the function that processes that type of barrier.\n> Otherwise, if we succeed in absorbing the barrier but a new instance\n> of the same barrier arrives meanwhile, we'll fail to realize that we\n> need to absorb the new one.\n\nHere's a new version of the patch for allowing errors in\nbarrier-handling functions and/or rejection of barriers by those\nfunctions. I think this responds to all of the previous review\ncomments from Andres. Also, here is an 0002 which is a handy bit of\ntest code that I wrote. It's not for commit, but it is useful for\nfinding bugs.\n\nIn addition to improving 0001 based on the review comments, I also\ntried to write a better commit message for it, but it might still be\npossible to do better there. It's a bit hard to explain the idea in\nthe abstract. For ALTER SYSTEM READ ONLY, the idea is that a process\nwith an XID -- and possibly a bunch of sub-XIDs, and possibly while\nidle-in-transaction -- can elect to FATAL rather than absorbing the\nbarrier. I suspect for other barrier types we might have certain\n(hopefully short) stretches of code where a barrier of a particular\ntype can't be absorbed because we're in the middle of doing something\nthat relies on the previous value of whatever state is protected by\nthe barrier. 
Holding off interrupts in those stretches of code would\nprevent the barrier from being absorbed, but would also prevent query\ncancel, backend termination, and absorption of other barrier types, so\nit seems possible that just allowing the barrier-absorption function\nfor a barrier of that type to just refuse the barrier until after the\nbackend exits the critical section of code will work out better.\n\nJust for kicks, I tried running 'make installcheck-parallel' while\nemitting placeholder barriers every 0.05 s after altering the\nbarrier-absorption function to always return false, just to see how\nugly that was. In round figures, it made it take 24 s vs. 21 s, so\nit's actually not that bad. However, it all depends on how many times\nyou hit CHECK_FOR_INTERRUPTS() how quickly, so it's easy to imagine\nthat the effect might be very non-uniform. That is, if you can get the\ncode to be running a tight loop that does little real work but does\nCHECK_FOR_INTERRUPTS() while refusing to absorb outstanding type of\nbarrier, it will probably suck. Therefore, I'm inclined to think that\nthe fairly strong cautionary logic in the patch is reasonable, but\nperhaps it can be better worded somehow. Thoughts welcome.\n\nI have not rebased the remainder of the patch series over these two.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 7 Oct 2020 13:48:49 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, Oct 7, 2020 at 11:19 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Sep 16, 2020 at 3:33 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > I don't mind a loop, but that one looks broken. We have to clear the\n> > bit before we call the function that processes that type of barrier.\n> > Otherwise, if we succeed in absorbing the barrier but a new instance\n> > of the same barrier arrives meanwhile, we'll fail to realize that we\n> > need to absorb the new one.\n>\n> Here's a new version of the patch for allowing errors in\n> barrier-handling functions and/or rejection of barriers by those\n> functions. I think this responds to all of the previous review\n> comments from Andres. Also, here is an 0002 which is a handy bit of\n> test code that I wrote. It's not for commit, but it is useful for\n> finding bugs.\n>\n> In addition to improving 0001 based on the review comments, I also\n> tried to write a better commit message for it, but it might still be\n> possible to do better there. It's a bit hard to explain the idea in\n> the abstract. For ALTER SYSTEM READ ONLY, the idea is that a process\n> with an XID -- and possibly a bunch of sub-XIDs, and possibly while\n> idle-in-transaction -- can elect to FATAL rather than absorbing the\n> barrier. I suspect for other barrier types we might have certain\n> (hopefully short) stretches of code where a barrier of a particular\n> type can't be absorbed because we're in the middle of doing something\n> that relies on the previous value of whatever state is protected by\n> the barrier. 
Holding off interrupts in those stretches of code would\n> prevent the barrier from being absorbed, but would also prevent query\n> cancel, backend termination, and absorption of other barrier types, so\n> it seems possible that just allowing the barrier-absorption function\n> for a barrier of that type to just refuse the barrier until after the\n> backend exits the critical section of code will work out better.\n>\n> Just for kicks, I tried running 'make installcheck-parallel' while\n> emitting placeholder barriers every 0.05 s after altering the\n> barrier-absorption function to always return false, just to see how\n> ugly that was. In round figures, it made it take 24 s vs. 21 s, so\n> it's actually not that bad. However, it all depends on how many times\n> you hit CHECK_FOR_INTERRUPTS() how quickly, so it's easy to imagine\n> that the effect might be very non-uniform. That is, if you can get the\n> code to be running a tight loop that does little real work but does\n> CHECK_FOR_INTERRUPTS() while refusing to absorb outstanding type of\n> barrier, it will probably suck. Therefore, I'm inclined to think that\n> the fairly strong cautionary logic in the patch is reasonable, but\n> perhaps it can be better worded somehow. Thoughts welcome.\n>\n> I have not rebased the remainder of the patch series over these two.\n>\nThat I'll do.\n\nOn a quick look at the latest 0001 patch, the following hunk to reset leftover\nflags seems to be unnecessary:\n\n+ /*\n+ * If some barrier types were not successfully absorbed, we will have\n+ * to try again later.\n+ */\n+ if (!success)\n+ {\n+ ResetProcSignalBarrierBits(flags);\n+ return;\n+ }\n\nWhen the ProcessBarrierPlaceholder() function returns false without an error,\nthat barrier flag gets reset within the while loop. In the case when it has an\nerror, the rest of the flags get reset in the catch block. Correct me if I am\nmissing something here.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Thu, 8 Oct 2020 15:52:58 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Oct 8, 2020 at 3:52 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Wed, Oct 7, 2020 at 11:19 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Wed, Sep 16, 2020 at 3:33 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > I don't mind a loop, but that one looks broken. We have to clear the\n> > > bit before we call the function that processes that type of barrier.\n> > > Otherwise, if we succeed in absorbing the barrier but a new instance\n> > > of the same barrier arrives meanwhile, we'll fail to realize that we\n> > > need to absorb the new one.\n> >\n> > Here's a new version of the patch for allowing errors in\n> > barrier-handling functions and/or rejection of barriers by those\n> > functions. I think this responds to all of the previous review\n> > comments from Andres. Also, here is an 0002 which is a handy bit of\n> > test code that I wrote. It's not for commit, but it is useful for\n> > finding bugs.\n> >\n> > In addition to improving 0001 based on the review comments, I also\n> > tried to write a better commit message for it, but it might still be\n> > possible to do better there. It's a bit hard to explain the idea in\n> > the abstract. For ALTER SYSTEM READ ONLY, the idea is that a process\n> > with an XID -- and possibly a bunch of sub-XIDs, and possibly while\n> > idle-in-transaction -- can elect to FATAL rather than absorbing the\n> > barrier. I suspect for other barrier types we might have certain\n> > (hopefully short) stretches of code where a barrier of a particular\n> > type can't be absorbed because we're in the middle of doing something\n> > that relies on the previous value of whatever state is protected by\n> > the barrier. 
Holding off interrupts in those stretches of code would\n> > prevent the barrier from being absorbed, but would also prevent query\n> > cancel, backend termination, and absorption of other barrier types, so\n> > it seems possible that just allowing the barrier-absorption function\n> > for a barrier of that type to just refuse the barrier until after the\n> > backend exits the critical section of code will work out better.\n> >\n> > Just for kicks, I tried running 'make installcheck-parallel' while\n> > emitting placeholder barriers every 0.05 s after altering the\n> > barrier-absorption function to always return false, just to see how\n> > ugly that was. In round figures, it made it take 24 s vs. 21 s, so\n> > it's actually not that bad. However, it all depends on how many times\n> > you hit CHECK_FOR_INTERRUPTS() how quickly, so it's easy to imagine\n> > that the effect might be very non-uniform. That is, if you can get the\n> > code to be running a tight loop that does little real work but does\n> > CHECK_FOR_INTERRUPTS() while refusing to absorb outstanding type of\n> > barrier, it will probably suck. Therefore, I'm inclined to think that\n> > the fairly strong cautionary logic in the patch is reasonable, but\n> > perhaps it can be better worded somehow. Thoughts welcome.\n> >\n> > I have not rebased the remainder of the patch series over these two.\n> >\n> That I'll do.\n>\n\nAttaching a rebased version that includes Robert's patches, for the latest master\nhead.\n\n> On a quick look at the latest 0001 patch, the following hunk to reset leftover\n> flags seems to be unnecessary:\n>\n> + /*\n> + * If some barrier types were not successfully absorbed, we will have\n> + * to try again later.\n> + */\n> + if (!success)\n> + {\n> + ResetProcSignalBarrierBits(flags);\n> + return;\n> + }\n>\n> When the ProcessBarrierPlaceholder() function returns false without an error,\n> that barrier flag gets reset within the while loop. The case when it has an\n> error, the rest of the flags get reset in the catch block. Correct me if I am\n> missing something here.\n>\n\nRobert, could you please confirm this?\n\nRegards,\nAmul",
"msg_date": "Wed, 28 Oct 2020 17:13:38 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Oct 8, 2020 at 6:23 AM Amul Sul <sulamul@gmail.com> wrote:\n> On a quick look at the latest 0001 patch, the following hunk to reset leftover\n> flags seems to be unnecessary:\n>\n> + /*\n> + * If some barrier types were not successfully absorbed, we will have\n> + * to try again later.\n> + */\n> + if (!success)\n> + {\n> + ResetProcSignalBarrierBits(flags);\n> + return;\n> + }\n>\n> When the ProcessBarrierPlaceholder() function returns false without an error,\n> that barrier flag gets reset within the while loop. The case when it has an\n> error, the rest of the flags get reset in the catch block. Correct me if I am\n> missing something here.\n\nGood catch. I think you're right. Do you want to update accordingly?\n\nAndres, do you like the new loop better?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 20 Nov 2020 11:23:44 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Fri, 20 Nov 2020 at 9:53 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Oct 8, 2020 at 6:23 AM Amul Sul <sulamul@gmail.com> wrote:\n> > On a quick look at the latest 0001 patch, the following hunk to reset\n> leftover\n> > flags seems to be unnecessary:\n> >\n> > + /*\n> > + * If some barrier types were not successfully absorbed, we will have\n> > + * to try again later.\n> > + */\n> > + if (!success)\n> > + {\n> > + ResetProcSignalBarrierBits(flags);\n> > + return;\n> > + }\n> >\n> > When the ProcessBarrierPlaceholder() function returns false without an\n> error,\n> > that barrier flag gets reset within the while loop. The case when it\n> has an\n> > error, the rest of the flags get reset in the catch block. Correct me\n> if I am\n> > missing something here.\n>\n> Good catch. I think you're right. Do you want to update accordingly?\n\n\nSure, I'll update that. Thanks for the confirmation.\n\n\n> Andres, do you like the new loop better?\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>",
"msg_date": "Fri, 20 Nov 2020 23:13:22 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Fri, Nov 20, 2020 at 11:13 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Fri, 20 Nov 2020 at 9:53 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n>>\n>> On Thu, Oct 8, 2020 at 6:23 AM Amul Sul <sulamul@gmail.com> wrote:\n>> > On a quick look at the latest 0001 patch, the following hunk to reset leftover\n>> > flags seems to be unnecessary:\n>> >\n>> > + /*\n>> > + * If some barrier types were not successfully absorbed, we will have\n>> > + * to try again later.\n>> > + */\n>> > + if (!success)\n>> > + {\n>> > + ResetProcSignalBarrierBits(flags);\n>> > + return;\n>> > + }\n>> >\n>> > When the ProcessBarrierPlaceholder() function returns false without an error,\n>> > that barrier flag gets reset within the while loop. The case when it has an\n>> > error, the rest of the flags get reset in the catch block. Correct me if I am\n>> > missing something here.\n>>\n>> Good catch. I think you're right. Do you want to update accordingly?\n>\n>\n> Sure, Ill update that. Thanks for the confirmation.\n>\n\nAttached is the updated version, where the unnecessary ResetProcSignalBarrierBits()\ncall in the 0001 patch is removed. The rest of the patches are unchanged, thanks.\n\n>>\n>> Andres, do you like the new loop better?\n>>\n\nRegards,\nAmul",
"msg_date": "Mon, 23 Nov 2020 12:06:45 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Sat, Sep 12, 2020 at 1:23 AM Amul Sul <sulamul@gmail.com> wrote:\n> > So, if we're in the middle of a paced checkpoint with a large\n> > checkpoint_timeout - a sensible real world configuration - we'll not\n> > process ASRO until that checkpoint is over? That seems very much not\n> > practical. What am I missing?\n>\n> Yes, the process doing ASRO will wait until that checkpoint is over.\n\nThat's not good. On a typical busy system, a system is going to be in\nthe middle of a checkpoint most of the time, and the checkpoint will\ntake a long time to finish - maybe minutes. We want this feature to\nrespond within milliseconds or a few seconds, not minutes. So we need\nsomething better here. I'm inclined to think that we should try to\nCompleteWALProhibitChange() at the same places we\nAbsorbSyncRequests(). We know from experience that bad things happen\nif we fail to absorb sync requests in a timely fashion, so we probably\nhave enough calls to AbsorbSyncRequests() to make sure that we always\ndo that work in a timely fashion. So, if we do this work in the same\nplace, then it will also be done in a timely fashion.\n\nI'm not 100% sure whether that introduces any other problems.\nCertainly, we're not going to be able to finish the checkpoint once\nwe've gone read-only, so we'll fail when we try to write the WAL\nrecord for that, or maybe earlier if there's anything else that tries\nto write WAL. Either the checkpoint needs to error out, like any other\nattempt to write WAL, and we can attempt a new checkpoint if and when\nwe go read/write, or else we need to finish writing stuff out to disk\nbut not actually write the checkpoint completion record (or any other\nWAL) unless and until the system goes back into read/write mode - and\nthen at that point the previously-started checkpoint will finish\nnormally. The latter seems better if we can make it work, but the\nformer is probably also acceptable. 
What you've got right now is not.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 9 Dec 2020 16:13:06 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On 2020-11-20 11:23:44 -0500, Robert Haas wrote:\n> Andres, do you like the new loop better?\n\nI do!\n\n\n",
"msg_date": "Wed, 9 Dec 2020 15:16:10 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "Hi,\n\nOn 2020-12-09 16:13:06 -0500, Robert Haas wrote:\n> That's not good. On a typical busy system, a system is going to be in\n> the middle of a checkpoint most of the time, and the checkpoint will\n> take a long time to finish - maybe minutes.\n\nOr hours, even. Due to the cost of FPWs it can make a lot of sense to\nreduce the frequency of that cost...\n\n\n> We want this feature to respond within milliseconds or a few seconds,\n> not minutes. So we need something better here.\n\nIndeed.\n\n\n> I'm inclined to think\n> that we should try to CompleteWALProhibitChange() at the same places\n> we AbsorbSyncRequests(). We know from experience that bad things\n> happen if we fail to absorb sync requests in a timely fashion, so we\n> probably have enough calls to AbsorbSyncRequests() to make sure that\n> we always do that work in a timely fashion. So, if we do this work in\n> the same place, then it will also be done in a timely fashion.\n\nSounds sane, without having looked in detail.\n\n\n> I'm not 100% sure whether that introduces any other problems.\n> Certainly, we're not going to be able to finish the checkpoint once\n> we've gone read-only, so we'll fail when we try to write the WAL\n> record for that, or maybe earlier if there's anything else that tries\n> to write WAL. Either the checkpoint needs to error out, like any other\n> attempt to write WAL, and we can attempt a new checkpoint if and when\n> we go read/write, or else we need to finish writing stuff out to disk\n> but not actually write the checkpoint completion record (or any other\n> WAL) unless and until the system goes back into read/write mode - and\n> then at that point the previously-started checkpoint will finish\n> normally. The latter seems better if we can make it work, but the\n> former is probably also acceptable. What you've got right now is not.\n\nI mostly wonder which of those two has which implications for how many\nFPWs we need to redo. 
Presumably stalling but not cancelling the current\ncheckpoint is better?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 9 Dec 2020 16:34:28 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Dec 10, 2020 at 6:04 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-12-09 16:13:06 -0500, Robert Haas wrote:\n> > That's not good. On a typical busy system, a system is going to be in\n> > the middle of a checkpoint most of the time, and the checkpoint will\n> > take a long time to finish - maybe minutes.\n>\n> Or hours, even. Due to the cost of FPWs it can make a lot of sense to\n> reduce the frequency of that cost...\n>\n>\n> > We want this feature to respond within milliseconds or a few seconds,\n> > not minutes. So we need something better here.\n>\n> Indeed.\n>\n>\n> > I'm inclined to think\n> > that we should try to CompleteWALProhibitChange() at the same places\n> > we AbsorbSyncRequests(). We know from experience that bad things\n> > happen if we fail to absorb sync requests in a timely fashion, so we\n> > probably have enough calls to AbsorbSyncRequests() to make sure that\n> > we always do that work in a timely fashion. So, if we do this work in\n> > the same place, then it will also be done in a timely fashion.\n>\n> Sounds sane, without having looked in detail.\n>\n\nUnderstood & agreed that we need to change the system state as soon as possible.\n\nI can see AbsorbSyncRequests() is called from 4 routines:\nCheckpointWriteDelay(), ProcessSyncRequests(), SyncPostCheckpoint() and\nCheckpointerMain(). Out of the 4, the first three execute with interrupts on\nhold, which will cause a problem when we emit the barrier and wait for its\nabsorption by all the processes including ourselves, causing an\ninfinite wait. I think that can be fixed by teaching WaitForProcSignalBarrier()\nnot to wait on self to absorb the barrier. Let that get absorbed at a later\npoint in time when interrupts are resumed. I assumed that we cannot do barrier\nprocessing right away since there could be other barriers (maybe in the future)\nincluding ours that should not be processed while interrupts are on hold.\n\n>\n> > I'm not 100% sure whether that introduces any other problems.\n> > Certainly, we're not going to be able to finish the checkpoint once\n> > we've gone read-only, so we'll fail when we try to write the WAL\n> > record for that, or maybe earlier if there's anything else that tries\n> > to write WAL. Either the checkpoint needs to error out, like any other\n> > attempt to write WAL, and we can attempt a new checkpoint if and when\n> > we go read/write, or else we need to finish writing stuff out to disk\n> > but not actually write the checkpoint completion record (or any other\n> > WAL) unless and until the system goes back into read/write mode - and\n> > then at that point the previously-started checkpoint will finish\n> > normally. The latter seems better if we can make it work, but the\n> > former is probably also acceptable. What you've got right now is not.\n>\n> I mostly wonder which of those two has which implications for how many\n> FPWs we need to redo. Presumably stalling but not cancelling the current\n> checkpoint is better?\n>\n\nAlso, I'd like to uphold this idea of stalling the checkpointer's work in the\nmiddle instead of canceling it. But here, we need to take care of shutdown\nrequests and death-of-postmaster cases that can cancel this stalling. If that\nhappens we need to make sure that no unwanted WAL insertion happens afterward,\nand for that the LocalXLogInsertAllowed flag needs to be updated correctly,\nsince the WAL-prohibit barrier processing was skipped for the checkpointer,\nwhich is the process that emits that barrier as mentioned above.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Mon, 14 Dec 2020 11:28:38 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Mon, Dec 14, 2020 at 11:28 AM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Thu, Dec 10, 2020 at 6:04 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2020-12-09 16:13:06 -0500, Robert Haas wrote:\n> > > That's not good. On a typical busy system, a system is going to be in\n> > > the middle of a checkpoint most of the time, and the checkpoint will\n> > > take a long time to finish - maybe minutes.\n> >\n> > Or hours, even. Due to the cost of FPWs it can make a lot of sense to\n> > reduce the frequency of that cost...\n> >\n> >\n> > > We want this feature to respond within milliseconds or a few seconds,\n> > > not minutes. So we need something better here.\n> >\n> > Indeed.\n> >\n> >\n> > > I'm inclined to think\n> > > that we should try to CompleteWALProhibitChange() at the same places\n> > > we AbsorbSyncRequests(). We know from experience that bad things\n> > > happen if we fail to absorb sync requests in a timely fashion, so we\n> > > probably have enough calls to AbsorbSyncRequests() to make sure that\n> > > we always do that work in a timely fashion. So, if we do this work in\n> > > the same place, then it will also be done in a timely fashion.\n> >\n> > Sounds sane, without having looked in detail.\n> >\n>\n> Understood & agreed that we need to change the system state as soon as possible.\n>\n> I can see AbsorbSyncRequests() is called from 4 routing as\n> CheckpointWriteDelay(), ProcessSyncRequests(), SyncPostCheckpoint() and\n> CheckpointerMain(). Out of 4, the first three executes with an interrupt is on\n> hod which will cause a problem when we do emit barrier and wait for those\n> barriers absorption by all the process including itself and will cause an\n> infinite wait. I think that can be fixed by teaching WaitForProcSignalBarrier(),\n> do not wait on self to absorb barrier. Let that get absorbed at a later point\n> in time when the interrupt is resumed. 
I assumed that we cannot do barrier\n> processing right away since there could be other barriers (maybe in the future)\n> including ours that should not process while the interrupt is on hold.\n>\n\nCreateCheckPoint() holds the CheckpointLock LWLock at the start and releases it\nat the end, which puts interrupts on hold. It is kinda surprising that we were\nholding this lock and keeping interrupts on hold for such a long time. We need\nthat CheckpointLock just to ensure that only one checkpoint happens at a time.\nCan't we do something simpler to ensure that instead of the lock? Holding off\ninterrupts for so long doesn't seem to be a good idea. Thoughts/Suggestions?\n\nRegards,\nAmul\n\n\n",
"msg_date": "Mon, 14 Dec 2020 20:03:32 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Mon, Dec 14, 2020 at 8:03 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Mon, Dec 14, 2020 at 11:28 AM Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > On Thu, Dec 10, 2020 at 6:04 AM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > Hi,\n> > >\n> > > On 2020-12-09 16:13:06 -0500, Robert Haas wrote:\n> > > > That's not good. On a typical busy system, a system is going to be in\n> > > > the middle of a checkpoint most of the time, and the checkpoint will\n> > > > take a long time to finish - maybe minutes.\n> > >\n> > > Or hours, even. Due to the cost of FPWs it can make a lot of sense to\n> > > reduce the frequency of that cost...\n> > >\n> > >\n> > > > We want this feature to respond within milliseconds or a few seconds,\n> > > > not minutes. So we need something better here.\n> > >\n> > > Indeed.\n> > >\n> > >\n> > > > I'm inclined to think\n> > > > that we should try to CompleteWALProhibitChange() at the same places\n> > > > we AbsorbSyncRequests(). We know from experience that bad things\n> > > > happen if we fail to absorb sync requests in a timely fashion, so we\n> > > > probably have enough calls to AbsorbSyncRequests() to make sure that\n> > > > we always do that work in a timely fashion. So, if we do this work in\n> > > > the same place, then it will also be done in a timely fashion.\n> > >\n> > > Sounds sane, without having looked in detail.\n> > >\n> >\n> > Understood & agreed that we need to change the system state as soon as possible.\n> >\n> > I can see AbsorbSyncRequests() is called from 4 routing as\n> > CheckpointWriteDelay(), ProcessSyncRequests(), SyncPostCheckpoint() and\n> > CheckpointerMain(). Out of 4, the first three executes with an interrupt is on\n> > hod which will cause a problem when we do emit barrier and wait for those\n> > barriers absorption by all the process including itself and will cause an\n> > infinite wait. 
I think that can be fixed by teaching WaitForProcSignalBarrier(),\n> > do not wait on self to absorb barrier. Let that get absorbed at a later point\n> > in time when the interrupt is resumed. I assumed that we cannot do barrier\n> > processing right away since there could be other barriers (maybe in the future)\n> > including ours that should not process while the interrupt is on hold.\n> >\n>\n> CreateCheckPoint() holds CheckpointLock LW at start and releases at the end\n> which puts interrupt on hold. This kinda surprising that we were holding this\n> lock and putting interrupt on hots for a long time. We do need that\n> CheckpointLock just to ensure that one checkpoint happens at a time. Can't we do\n> something easy to ensure that instead of the lock? Probably holding off\n> interrupts for so long doesn't seem to be a good idea. Thoughts/Suggestions?\n>\n\nTo move development, testing, and review forward, I have commented out the\ncode acquiring CheckpointLock from CreateCheckPoint() in the 0003 patch and\nadded the changes for the checkpointer so that the system read-write state\nchange request can be processed as soon as possible, as suggested by Robert[1].\n\nI have started a new thread[2] to understand the need for the CheckpointLock in\nthe CreateCheckPoint() function. Until then, we can continue working on this\nfeature by skipping CheckpointLock in CreateCheckPoint(), and therefore the 0003\npatch is marked WIP.\n\n1] http://postgr.es/m/CA+TgmoYexwDQjdd1=15KMz+7VfHVx8VHNL2qjRRK92P=CSZDxg@mail.gmail.com\n2] http://postgr.es/m/CAAJ_b97XnBBfYeSREDJorFsyoD1sHgqnNuCi=02mNQBUMnA=FA@mail.gmail.com\n\nRegards,\nAmul",
"msg_date": "Thu, 14 Jan 2021 16:59:14 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Jan 14, 2021 at 6:29 AM Amul Sul <sulamul@gmail.com> wrote:\n> To move development, testing, and the review forward, I have commented out the\n> code acquiring CheckpointLock from CreateCheckPoint() in the 0003 patch and\n> added the changes for the checkpointer so that system read-write state change\n> request can be processed as soon as possible, as suggested by Robert[1].\n>\n> I have started a new thread[2] to understand the need for the CheckpointLock in\n> CreateCheckPoint() function. Until then we can continue work on this feature by\n> skipping CheckpointLock in CreateCheckPoint(), and therefore the 0003 patch is\n> marked WIP.\n\nBased on the favorable review comment from Andres upthread and also\nyour feedback, I committed 0001.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 18 Jan 2021 13:02:39 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Jan 14, 2021 at 6:29 AM Amul Sul <sulamul@gmail.com> wrote:\n> To move development, testing, and the review forward, I have commented out the\n> code acquiring CheckpointLock from CreateCheckPoint() in the 0003 patch and\n> added the changes for the checkpointer so that system read-write state change\n> request can be processed as soon as possible, as suggested by Robert[1].\n\nI am extremely doubtful about SetWALProhibitState()'s claim that \"The\nfinal state can only be requested by the checkpointer or by the\nsingle-user so that there will be no chance that the server is already\nin the desired final state.\" It seems like there is an obvious race\ncondition: CompleteWALProhibitChange() is called with a cur_state_gen\nargument which embeds the last state we saw, but there's nothing to\nkeep it from changing between the time we saw it and the time that\nfunction calls SetWALProhibitState(), is there? We aren't holding any\nlock. It seems to me that SetWALProhibitState() needs to be rewritten\nto avoid this assumption.\n\nOn a related note, SetWALProhibitState() has only two callers. One\npasses is_final_state as true, and the other as false: it's never a\nvariable. The two cases are handled mostly differently. This doesn't\nseem good. A lot of the logic in this function should probably be\nmoved to the calling sites, especially because it's almost certainly\nwrong for this function to be basing what it does on the *current* WAL\nprohibit state rather than the WAL prohibit state that was in effect\nat the time we made the decision to call this function in the first\nplace. As I mentioned in the previous paragraph, that's a built-in\nrace condition. To put that another way, this function should NOT feel\nfree to call GetWALProhibitStateGen().\n\nI don't really see why we should have both an SQL callable function\npg_alter_wal_prohibit_state() and also a DDL command for this. 
If\nwe're going to go with a functional interface, and I guess the idea of\nthat is to make it so GRANT EXECUTE works, then why not just get rid\nof the DDL?\n\nRequestWALProhibitChange() doesn't look very nice. It seems like it's\nbasically the second half of pg_alter_wal_prohibit_state(), not being\ncalled from anywhere else. It doesn't seem to add anything to separate\nit out like this; the interface between the two is not especially\nclean.\n\nIt seems odd that ProcessWALProhibitStateChangeRequest() returns\nwithout doing anything if !AmCheckpointerProcess(), rather than having\nthat be an Assert(). Why is it like that?\n\nI think WALProhibitStateShmemInit() would probably look more similar\nto other functions if it did if (found) { stuff; } rather than if\n(!found) return; stuff; -- but I might be wrong about the existing\nprecedent.\n\nThe SetLastCheckPointSkipped() and LastCheckPointIsSkipped() stuff\nseems confusingly-named, because we have other reasons for skipping a\ncheckpoint that are not what we're talking about here. I think this is\ntalking about whether we've performed a checkpoint after recovery, and\nthe naming should reflect that. But I think there's something else\nwrong with the design, too: why is this protected by a spinlock? I\nhave questions in both directions. On the one hand, I wonder why we\nneed any kind of lock at all. On the other hand, if we do need a lock,\nI wonder why a spinlock that protects only the setting and clearing of\nthe flag and nothing else is sufficient. There are zero comments\nexplaining what the idea behind this locking regime is, and I can't\nunderstand why it should be correct.\n\nIn fact, I think this area needs a broader rethink. Like, the way you\nintegrated that stuff into StartupXLog(), it sure looks to me like we\nmight skip the checkpoint but still try to write other WAL records.\nBefore we reach the offending segment of code, we call\nUpdateFullPageWrites(). 
Afterwards, we call XLogReportParameters().\nBoth of those are going to potentially write WAL. I guess you could\nargue that's OK, on the grounds that neither function is necessarily\ngoing to log anything, but I don't think I believe that. If I make my\nserver read only, take the OS down, change some GUCs, and then start\nit again, I don't expect it to PANIC.\n\nAlso, I doubt that it's OK to skip the checkpoint as this code does\nand then go ahead and execute recovery_end_command and update the\ncontrol file anyway. It sure looks like the existing code is written\nwith the assumption that the checkpoint happens before those other\nthings. One idea I just had was: suppose that, if the system is READ\nONLY, we don't actually exit recovery right away, and the startup\nprocess doesn't exit. Instead we just sit there and wait for the\nsystem to be made read-write again before doing anything else. But\nthen if hot_standby=false, there's no way for someone to execute a\nALTER SYSTEM READ WRITE and/or pg_alter_wal_prohibit_state(), which\nseems bad. So perhaps we need to let in regular connections *as if*\nthe system were read-write while postponing not just the\nend-of-recovery checkpoint but also the other associated things like\nUpdateFullPageWrites(), XLogReportParameters(), recovery_end_command,\ncontrol file update, etc. until the end of recovery. Or maybe that's\nnot the right idea either, but regardless of what we do here it needs\nclear comments justifying it. The current version of the patch does\nnot have any.\n\nI think that you've mis-positioned the check in autovacuum.c. Note\nthat the comment right afterwards says: \"a worker finished, or\npostmaster signaled failure to start a worker\". Those are things we\nshould still check for even when the system is R/O. What we don't want\nto do in that case is start new workers. I would suggest revising the\ncomment that starts with \"There are some conditions that...\" to\nmention three conditions. 
The new one would be that the system is in a\nread-only state. I'd mention that first, making the existing ones #2\nand #3, and then add the code to \"continue;\" in that case right after\nthat comment, before setting current_time.\n\nSendsSignalToCheckpointer() has multiple problems. As far as the name,\nit should at least be \"Send\" rather than \"Sends\" but the corresponding\nfunctions elsewhere have names like SendPostmasterSignal() not\nSendSignalToPostmaster(). Also, why is it OK for it to use elog()\nrather than ereport()? Also, why is it an error if the checkpointer's\nnot running, rather than just having the next checkpointer do it when\nit's relaunched? Also, why pass SIGINT as an argument if there's only\none caller? A related thing that's also odd is that sending SIGINT\ncalls ReqCheckpointHandler() not anything specific to prohibiting WAL.\nThat is probably OK because that function now just sets the latch. But\nthen we could stop sending SIGINT to the checkpointer at all and just\nsend SIGUSR1, which would also set the latch, without using up a\nsignal. I wonder if we should make that change as a separate\npreparatory patch. It seems like that would clear things up; it would\nremove the oddity that this patch is invoking a handler called\nReqCheckpointHandler() with no intention of requesting a checkpoint,\nbecause ReqCheckpointHandler() would be gone. That problem could\nalso be fixed by renaming ReqCheckpointHandler() to something\nclearer, but that seems inferior.\n\nThis is probably not a complete list of problems. Review from others\nwould be appreciated.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 19 Jan 2021 15:45:03 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, Jan 20, 2021 at 2:15 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Jan 14, 2021 at 6:29 AM Amul Sul <sulamul@gmail.com> wrote:\n> > To move development, testing, and the review forward, I have commented out the\n> > code acquiring CheckpointLock from CreateCheckPoint() in the 0003 patch and\n> > added the changes for the checkpointer so that system read-write state change\n> > request can be processed as soon as possible, as suggested by Robert[1].\n>\n> I am extremely doubtful about SetWALProhibitState()'s claim that \"The\n> final state can only be requested by the checkpointer or by the\n> single-user so that there will be no chance that the server is already\n> in the desired final state.\" It seems like there is an obvious race\n> condition: CompleteWALProhibitChange() is called with a cur_state_gen\n> argument which embeds the last state we saw, but there's nothing to\n> keep it from changing between the time we saw it and the time that\n> function calls SetWALProhibitState(), is there? We aren't holding any\n> lock. It seems to me that SetWALProhibitState() needs to be rewritten\n> to avoid this assumption.\n>\n\nIt is not like that, let me explain. When a user backend requests to alter the WAL\nprohibit state by using the ASRO/ASRW DDL with the previous patch or by calling\npg_alter_wal_prohibit_state(), the WAL prohibit state in shared memory will be\nset to the transition state, i.e. going-read-only or going-read-write, if it is\nnot already. If another backend tries to request the same alteration to the\nWAL prohibit state then nothing is going to be changed in shared memory, but that\nbackend needs to wait until the transition to the final WAL prohibit state\ncompletes. If a backend requests the opposite state to the one whose transition\nis in progress then it will see an error such as \"system state\ntransition to read only/write is already in progress\". 
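As a rough decision table, the behavior I have implemented looks like this (just a sketch; all the names below are made up for illustration and are not the patch's actual identifiers):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical names for illustration only; the patch's identifiers differ. */
typedef enum
{
    DEMO_READ_WRITE,            /* final state */
    DEMO_GOING_READ_ONLY,       /* transition state */
    DEMO_READ_ONLY,             /* final state */
    DEMO_GOING_READ_WRITE       /* transition state */
} DemoState;

typedef enum
{
    DEMO_START_TRANSITION,      /* set the transition state in shared memory */
    DEMO_WAIT,                  /* same transition already in progress: wait */
    DEMO_NOOP,                  /* already in the requested final state */
    DEMO_CONFLICT               /* opposite transition in progress: error out */
} DemoAction;

/*
 * What a requesting backend does, given the current shared-memory state and
 * whether it wants WAL prohibited (read-only) or allowed (read-write).
 */
static DemoAction
demo_classify_request(DemoState cur, bool want_read_only)
{
    if (want_read_only)
    {
        switch (cur)
        {
            case DEMO_READ_WRITE:
                return DEMO_START_TRANSITION;
            case DEMO_GOING_READ_ONLY:
                return DEMO_WAIT;
            case DEMO_READ_ONLY:
                return DEMO_NOOP;
            case DEMO_GOING_READ_WRITE:
                return DEMO_CONFLICT;
        }
    }
    else
    {
        switch (cur)
        {
            case DEMO_READ_ONLY:
                return DEMO_START_TRANSITION;
            case DEMO_GOING_READ_WRITE:
                return DEMO_WAIT;
            case DEMO_READ_WRITE:
                return DEMO_NOOP;
            case DEMO_GOING_READ_ONLY:
                return DEMO_CONFLICT;
        }
    }
    return DEMO_NOOP;           /* not reached */
}
```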
Only one transition\nstate can be set at a time.\n\nAs for the change from a transition state to the final states, i.e.\nread-only/read-write, that can only be made by the checkpointer or a standalone\nbackend, so there won't be any concurrency when changing a transition state to a\nfinal state.\n\n> On a related note, SetWALProhibitState() has only two callers. One\n> passes is_final_state as true, and the other as false: it's never a\n> variable. The two cases are handled mostly differently. This doesn't\n> seem good. A lot of the logic in this function should probably be\n> moved to the calling sites, especially because it's almost certainly\n> wrong for this function to be basing what it does on the *current* WAL\n> prohibit state rather than the WAL prohibit state that was in effect\n> at the time we made the decision to call this function in the first\n> place. As I mentioned in the previous paragraph, that's a built-in\n> race condition. To put that another way, this function should NOT feel\n> free to call GetWALProhibitStateGen().\n>\n\nUnderstood. I have removed SetWALProhibitState() and moved the respective code\nto the caller in the attached version.\n\n> I don't really see why we should have both an SQL callable function\n> pg_alter_wal_prohibit_state() and also a DDL command for this. If\n> we're going to go with a functional interface, and I guess the idea of\n> that is to make it so GRANT EXECUTE works, then why not just get rid\n> of the DDL?\n>\n\nOk, dropped the patch for the DDL command. If in the future we want it back, I\ncan add that again.\n\nNow, I am a little bit concerned about the current function name. How about the\nname pg_set_wal_prohibit_state(bool), or having two functions,\npg_set_wal_prohibit_state(void) and pg_unset_wal_prohibit_state(void), or any\nother suggestions?\n\n> RequestWALProhibitChange() doesn't look very nice. 
It seems like it's\n> basically the second half of pg_alter_wal_prohibit_state(), not being\n> called from anywhere else. It doesn't seem to add anything to separate\n> it out like this; the interface between the two is not especially\n> clean.\n>\n\nOk, moved that code into pg_alter_wal_prohibit_state() in the attached version.\n\n> It seems odd that ProcessWALProhibitStateChangeRequest() returns\n> without doing anything if !AmCheckpointerProcess(), rather than having\n> that be an Assert(). Why is it like that?\n>\n\nLike AbsorbSyncRequests().\n\n> I think WALProhibitStateShmemInit() would probably look more similar\n> to other functions if it did if (found) { stuff; } rather than if\n> (!found) return; stuff; -- but I might be wrong about the existing\n> precedent.\n>\n\nOk, did the same in the attached version.\n\n> The SetLastCheckPointSkipped() and LastCheckPointIsSkipped() stuff\n> seems confusingly-named, because we have other reasons for skipping a\n> checkpoint that are not what we're talking about here. I think this is\n> talking about whether we've performed a checkpoint after recovery, and\n> the naming should reflect that. But I think there's something else\n> wrong with the design, too: why is this protected by a spinlock? I\n> have questions in both directions. On the one hand, I wonder why we\n> need any kind of lock at all. On the other hand, if we do need a lock,\n> I wonder why a spinlock that protects only the setting and clearing of\n> the flag and nothing else is sufficient. There are zero comments\n> explaining what the idea behind this locking regime is, and I can't\n> understand why it should be correct.\n>\n\nRenamed those functions to SetRecoveryCheckpointSkippedFlag() and\nRecoveryCheckpointIsSkipped() respectively, and removed the lock, which is not\nneeded. Updated the comment for the lastRecoveryCheckpointSkipped variable\nregarding the lock requirement.\n\n> In fact, I think this area needs a broader rethink. 
Like, the way you\n> integrated that stuff into StartupXLog(), it sure looks to me like we\n> might skip the checkpoint but still try to write other WAL records.\n> Before we reach the offending segment of code, we call\n> UpdateFullPageWrites(). Afterwards, we call XLogReportParameters().\n> Both of those are going to potentially write WAL. I guess you could\n> argue that's OK, on the grounds that neither function is necessarily\n> going to log anything, but I don't think I believe that. If I make my\n> server read only, take the OS down, change some GUCs, and then start\n> it again, I don't expect it to PANIC.\n>\n\nIf you think that there will be a panic when UpdateFullPageWrites() and/or\nXLogReportParameters() tries to write WAL while the shared memory state for WAL\nprohibited is set, it is not like that. For those functions, WAL writing is\nexplicitly enabled by calling LocalSetXLogInsertAllowed().\n\nI was under the impression that there won't be any problem if we allow\nUpdateFullPageWrites() and XLogReportParameters() to write WAL. They can be\nconsidered an exception, since it is fine that these WAL records are not streamed\nto the standby during graceful failover; I may be wrong though.\n\n> Also, I doubt that it's OK to skip the checkpoint as this code does\n> and then go ahead and execute recovery_end_command and update the\n> control file anyway. It sure looks like the existing code is written\n> with the assumption that the checkpoint happens before those other\n> things.\n\nHmm, here we could go wrong. I need to look at this part carefully.\n\n> One idea I just had was: suppose that, if the system is READ\n> ONLY, we don't actually exit recovery right away, and the startup\n> process doesn't exit. Instead we just sit there and wait for the\n> system to be made read-write again before doing anything else. 
But\n> then if hot_standby=false, there's no way for someone to execute a\n> ALTER SYSTEM READ WRITE and/or pg_alter_wal_prohibit_state(), which\n> seems bad. So perhaps we need to let in regular connections *as if*\n> the system were read-write while postponing not just the\n> end-of-recovery checkpoint but also the other associated things like\n> UpdateFullPageWrites(), XLogReportParameters(), recovery_end_command,\n> control file update, etc. until the end of recovery. Or maybe that's\n> not the right idea either, but regardless of what we do here it needs\n> clear comments justifying it. The current version of the patch does\n> not have any.\n>\n\nWill get back to you on this. Let me think more on this and the previous\npoint.\n\n> I think that you've mis-positioned the check in autovacuum.c. Note\n> that the comment right afterwards says: \"a worker finished, or\n> postmaster signaled failure to start a worker\". Those are things we\n> should still check for even when the system is R/O. What we don't want\n> to do in that case is start new workers. I would suggest revising the\n> comment that starts with \"There are some conditions that...\" to\n> mention three conditions. The new one would be that the system is in a\n> read-only state. I'd mention that first, making the existing ones #2\n> and #3, and then add the code to \"continue;\" in that case right after\n> that comment, before setting current_time.\n>\n\nDone.\n\n> SendsSignalToCheckpointer() has multiple problems. As far as the name,\n> it should at least be \"Send\" rather than \"Sends\" but the corresponding\n\n\"Sends\" is unacceptable, it is a typo.\n\n> functions elsewhere have names like SendPostmasterSignal() not\n> SendSignalToPostmaster(). Also, why is it OK for it to use elog()\n> rather than ereport()? Also, why is it an error if the checkpointer's\n> not running, rather than just having the next checkpointer do it when\n> it's relaunched?\n\nOk, now the function only returns true or false. 
It's up to the caller what to\ndo with that. In our case, the caller will issue a warning only. If you want,\nthis could be a NOTICE as well.\n\n> Also, why pass SIGINT as an argument if there's only\n> one caller?\n\nI thought anybody could also reuse it to send some other signal to the\ncheckpointer process in the future.\n\n> A related thing that's also odd is that sending SIGINT\n> calls ReqCheckpointHandler() not anything specific to prohibiting WAL.\n> That is probably OK because that function now just sets the latch. But\n> then we could stop sending SIGINT to the checkpointer at all and just\n> send SIGUSR1, which would also set the latch, without using up a\n> signal. I wonder if we should make that change as a separate\n> preparatory patch. It seems like that would clear things up; it would\n> remove the oddity that this patch is invoking a handler called\n> ReqCheckpointHandler() with no intention of requesting a checkpoint,\n> because ReqCheckpointHandler() would be gone. That problem could\n> also be fixed by renaming ReqCheckpointHandler() to something\n> clearer, but that seems inferior.\n>\n\nI am not clear on this part. In the attached version I am sending SIGUSR1\ninstead of SIGINT, which works for me.\n\n> This is probably not a complete list of problems. Review from others\n> would be appreciated.\n>\n\nThanks a lot.\n\nThe attached version does not address all your comments, I'll continue my work\non that.\n\nRegards,\nAmul",
"msg_date": "Thu, 21 Jan 2021 20:16:46 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Jan 21, 2021 at 9:47 AM Amul Sul <sulamul@gmail.com> wrote:\n> It is not like that, let me explain. When a user backend requests to alter WAL\n> prohibit state by using ASRO/ASRW DDL with the previous patch or calling\n> pg_alter_wal_prohibit_state() then WAL prohibit state in shared memory will be\n> set to the transition state i.e. going-read-only or going-read-write if it is\n> not already. If another backend trying to request the same alteration to the\n> wal prohibit state then nothing going to be changed in shared memory but that\n> backend needs to wait until the transition to the final wal prohibited state\n> completes. If a backend tries to request for the opposite state than the\n> previous which is in progress then it will see an error as \"system state\n> transition to read only/write is already in progress\". At a time only one\n> transition state can be set.\n\nHrm. Well, then that needs to be abundantly clear in the relevant comments.\n\n> Now, I am a little bit concerned about the current function name. How about\n> pg_set_wal_prohibit_state(bool) name or have two functions as\n> pg_set_wal_prohibit_state(void) and pg_unset_wal_prohibit_state(void) or any\n> other suggestions?\n\nHow about pg_prohibit_wal(true|false)?\n\n> > It seems odd that ProcessWALProhibitStateChangeRequest() returns\n> > without doing anything if !AmCheckpointerProcess(), rather than having\n> > that be an Assert(). Why is it like that?\n>\n> Like AbsorbSyncRequests().\n\nWell, that can be called not from the checkpointer, according to the\ncomments. Specifically from the postmaster, I guess. Again, comments\nplease.\n\n> If you think that there will be panic when UpdateFullPageWrites() and/or\n> XLogReportParameters() tries to write WAL since the shared memory state for WAL\n> prohibited is set then it is not like that. 
For those functions, WAL write is\n> explicitly enabled by calling LocalSetXLogInsertAllowed().\n>\n> I was under the impression that there won't be any problem if we allow the\n> writing WAL to UpdateFullPageWrites() and XLogReportParameters(). It can be\n> considered as an exception since it is fine that this WAL record is not streamed\n> to standby while graceful failover, I may be wrong though.\n\nI don't think that's OK. I mean, the purpose of the feature is to\nprohibit WAL. If it doesn't do that, I believe it will fail to satisfy\nthe principle of least surprise.\n\n> I am not clear on this part. In the attached version I am sending SIGUSR1\n> instead of SIGINT, which works for me.\n\nOK.\n\n> The attached version does not address all your comments, I'll continue my work\n> on that.\n\nSome thoughts on this version:\n\n+/* Extract last two bits */\n+#define WALPROHIBIT_CURRENT_STATE(stateGeneration) \\\n+ ((uint32)(stateGeneration) & ((uint32) ((1 << 2) - 1)))\n+#define WALPROHIBIT_NEXT_STATE(stateGeneration) \\\n+ WALPROHIBIT_CURRENT_STATE((stateGeneration + 1))\n\nThis is really confusing. First, the comment looks like it applies to\nboth based on how it is positioned, but that's clearly not true.\nSecond, the naming is really hard to understand. Third, there don't\nseem to be comments explaining the theory of what is going on here.\nFourth, stateGeneration refers not to which generation of state we've\ngot here but to the combination of the state and the generation.\nHowever, it's not clear that we ever really use the generation for\nanything.\n\nI think that the direction you went with this is somewhat different\nfrom what I had in mind. That may be OK, but let me just explain the\ndifference. We both had in mind the idea that the low two bits of the\nstate would represent the current state and the upper bits would\nrepresent the state generation. 
However, I wasn't necessarily\nimagining that the only supported operation was making the combined\nvalue go up by 1. For instance, I had thought that perhaps the effect\nof trying to go read-only when we're in the middle of going read-write\nwould be to cancel the previous operation and start the new one. What\nyou have instead is that it errors out. So in your model a change\nalways has to finish before the next one can start, which in turn\nmeans that the sequence is completely linear. In my idea the\nstate+generation might go from say 1 to 7, because trying to go\nread-write would cancel the previous attempt to go read-only and\nreplace it with an attempt to go the other direction, and from 7 we\nmight go to 9 if somebody now tries to go read-only again before\nthat finishes. In your model, there's never any sort of cancellation\nof that kind, so you can only go 0->1->2->3->4->5->6->7->8->9 etc.\n\nOne disadvantage of the way you've got it from a user perspective is\nthat if I'm writing a tool, I might get an error telling me that the\nstate change I'm trying to make is already in progress, and then I\nhave to retry. With the other design, I might attempt a state change\nand have it fail because the change can't be completed, but I won't\never fail because I attempt a state change and it can't be started\nbecause we're in the wrong starting state. So, with this design, as\nthe tool author, I may not be able to just say, well, I tried to\nchange the state and it didn't work, so report the error to the user.\nI think with the other approach that would be more viable. But I might\nbe wrong here; it would be interesting to hear what other people\nthink.\n\nI dislike the use of the term state_gen or StateGen to refer to the\ncombination of a state and a generation. That seems unintuitive. 
I'm\ntempted to propose that we just call it a counter, and, assuming we\nstick with the design as you now have it, explain it with a comment\nlike this in walprohibit.h:\n\n\"There are four possible states. A brand new database cluster is\nalways initially WALPROHIBIT_STATE_READ_WRITE. If the user tries to\nmake it read only, then we enter the state\nWALPROHIBIT_STATE_GOING_READ_ONLY. When the transition is complete, we\nenter the state WALPROHIBIT_STATE_READ_ONLY. If the user subsequently\ntries to make it read write, we will enter the state\nWALPROHIBIT_STATE_GOING_READ_WRITE. When that transition is complete,\nwe will enter the state WALPROHIBIT_STATE_READ_WRITE. These four state\ntransitions are the only ones possible; for example, if we're\ncurrently in state WALPROHIBIT_STATE_GOING_READ_ONLY, an attempt to go\nread-write will produce an error, and a second attempt to go read-only\nwill not cause a state change. Thus, we can represent the state as a\nshared-memory counter whose value only ever changes by adding 1.\nThe initial value at postmaster startup is either 0 or 2, depending on\nwhether the control file specifies that the system is starting\nread-only or read-write.\"\n\nAnd then maybe change all the state_gen references to reference\nwal_prohibit_counter or, where a shorter name is appropriate, counter.\n\nI think this might be clearer if we used different data types for the\nstate and the state/generation combination, with functions to convert\nbetween them. e.g. instead of define WALPROHIBIT_STATE_READ_WRITE 0\netc. maybe do:\n\ntypedef enum { ... = 0, ... = 1, ... = 2, ... = 3 } WALProhibitState;\n\nAnd then instead of WALPROHIBIT_CURRENT_STATE perhaps something like:\n\nstatic inline WALProhibitState\nGetWALProhibitState(uint32 wal_prohibit_counter)\n{\n return (WALProhibitState) (wal_prohibit_counter & 3);\n}\n\nI don't really know why we need WALPROHIBIT_NEXT_STATE at all,\nhonestly. It's just a macro to add 1 to an integer. 
And you don't even\nuse it consistently. Like pg_alter_wal_prohibit_state() does this:\n\n+ /* Server is already in requested state */\n+ if (WALPROHIBIT_NEXT_STATE(new_transition_state) == cur_state)\n+ PG_RETURN_VOID();\n\nBut then later does this:\n\n+ next_state_gen = cur_state_gen + 1;\n\nWhich is exactly the same thing as what you computed above using\nWALPROHIBIT_NEXT_STATE() but spelled differently. I am not exactly\nsure how to structure this to make it as simple as possible, but I\ndon't think this is it.\n\nHonestly this whole logic here seems correct but a bit hard to follow.\nLike, maybe:\n\nwal_prohibit_counter = pg_atomic_read_u32(&WALProhibitState->shared_counter);\nswitch (GetWALProhibitState(wal_prohibit_counter))\n{\ncase WALPROHIBIT_STATE_READ_WRITE:\nif (!walprohibit) return;\nincrement = true;\nbreak;\ncase WALPROHIBIT_STATE_GOING_READ_WRITE:\nif (walprohibit) ereport(ERROR, ...);\nbreak;\n...\n}\n\nAnd then just:\n\nif (increment)\n wal_prohibit_counter =\npg_atomic_add_fetch_u32(&WALProhibitState->shared_counter, 1);\ntarget_counter_value = wal_prohibit_counter + 1;\n// random stuff\n// eventually wait until the counter reaches >= target_counter_value\n\nThis might not be exactly the right idea though. I'm just looking for\na way to make it clearer, because I find it a bit hard to understand\nright now. Maybe you or someone else will have a better idea.\n\n+ success =\npg_atomic_compare_exchange_u32(&WALProhibitState->shared_state_generation,\n+\n &cur_state_gen, next_state_gen);\n+ Assert(success);\n\nI am almost positive that this is not OK. I think on some platforms\natomics just randomly fail some percentage of the time. You always\nneed a retry loop. Anyway, what happens if two people enter this\nfunction at the same time and both read the same starting counter\nvalue before either does anything?\n\n+ /* To be sure that any later reads of memory happen\nstrictly after this. 
*/\n+ pg_memory_barrier();\n\nYou don't need a memory barrier after use of an atomic. The atomic\nincludes a barrier.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 25 Jan 2021 16:07:49 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Tue, Jan 26, 2021 at 2:38 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Jan 21, 2021 at 9:47 AM Amul Sul <sulamul@gmail.com> wrote:\n> > It is not like that, let me explain. When a user backend requests to alter WAL\n> > prohibit state by using ASRO/ASRW DDL with the previous patch or calling\n> > pg_alter_wal_prohibit_state() then WAL prohibit state in shared memory will be\n> > set to the transition state i.e. going-read-only or going-read-write if it is\n> > not already. If another backend trying to request the same alteration to the\n> > wal prohibit state then nothing going to be changed in shared memory but that\n> > backend needs to wait until the transition to the final wal prohibited state\n> > completes. If a backend tries to request for the opposite state than the\n> > previous which is in progress then it will see an error as \"system state\n> > transition to read only/write is already in progress\". At a time only one\n> > transition state can be set.\n>\n> Hrm. Well, then that needs to be abundantly clear in the relevant comments.\n>\n> > Now, I am a little bit concerned about the current function name. How about\n> > pg_set_wal_prohibit_state(bool) name or have two functions as\n> > pg_set_wal_prohibit_state(void) and pg_unset_wal_prohibit_state(void) or any\n> > other suggestions?\n>\n> How about pg_prohibit_wal(true|false)?\n>\n\nLGTM. Used this.\n\n> > > It seems odd that ProcessWALProhibitStateChangeRequest() returns\n> > > without doing anything if !AmCheckpointerProcess(), rather than having\n> > > that be an Assert(). Why is it like that?\n> >\n> > Like AbsorbSyncRequests().\n>\n> Well, that can be called not from the checkpointer, according to the\n> comments. Specifically from the postmaster, I guess. 
Again, comments\n> please.\n>\n\nDone.\n\n> > If you think that there will be panic when UpdateFullPageWrites() and/or\n> > XLogReportParameters() tries to write WAL since the shared memory state for WAL\n> > prohibited is set then it is not like that. For those functions, WAL write is\n> > explicitly enabled by calling LocalSetXLogInsertAllowed().\n> >\n> > I was under the impression that there won't be any problem if we allow the\n> > writing WAL to UpdateFullPageWrites() and XLogReportParameters(). It can be\n> > considered as an exception since it is fine that this WAL record is not streamed\n> > to standby while graceful failover, I may be wrong though.\n>\n> I don't think that's OK. I mean, the purpose of the feature is to\n> prohibit WAL. If it doesn't do that, I believe it will fail to satisfy\n> the principle of least surprise.\n>\n\nYes, you are correct.\n\nI am still on this. The thing that worries me here is the sequence of WAL records\nwritten in the startup process -- UpdateFullPageWrites() generates a record\njust before the recovery checkpoint record and XLogReportParameters() just\nafter that, but before any other backend could write any WAL record. We might\nalso need to follow the same sequence while changing the system to read-write.\n\nBut in our case maintaining this sequence seems to be a little difficult. Let me\nexplain: when a backend executes a function (i.e. pg_prohibit_wal(false)) to\nmake the system read-write, that system state change will be conveyed by\nthe Checkpointer process to all existing backends using a global barrier, and\nthen the checkpointer might want to write those records. While the checkpoint is\nin progress, a few existing backends that have already absorbed the barrier can\nwrite new records that might come before the aforesaid WAL record sequence is\nwritten. 
Also, we might\nthink that we could write these records before emitting the super barrier, which\nalso might not solve the problem, because a new backend could connect to the\nserver just after the read-write system state change request was made but before\nthe Checkpointer could pick it up. Such a backend could write WAL before the\nCheckpointer could (see IsWALProhibited()).\n\nApart from this, I also had a thought on the recovery_end_command execution\nthat happens just after the recovery end checkpoint in the Startup process.\nFirst of all, why should we go and execute this command if we are\nread-only? I don't think there will be any use in booting up a read-only server\nas a standby, which is itself read-only to some extent. Also, since pg_basebackup\nfrom a read-only server is not allowed, a new standby cannot be set up. IMHO,\nwe should simply error out if a read-only server is booted up as a\nstandby using the standby.signal file. Thoughts?\n\n> > I am not clear on this part. In the attached version I am sending SIGUSR1\n> > instead of SIGINT, which works for me.\n>\n> OK.\n>\n> > The attached version does not address all your comments, I'll continue my work\n> > on that.\n>\n> Some thoughts on this version:\n>\n> +/* Extract last two bits */\n> +#define WALPROHIBIT_CURRENT_STATE(stateGeneration) \\\n> + ((uint32)(stateGeneration) & ((uint32) ((1 << 2) - 1)))\n> +#define WALPROHIBIT_NEXT_STATE(stateGeneration) \\\n> + WALPROHIBIT_CURRENT_STATE((stateGeneration + 1))\n>\n> This is really confusing. First, the comment looks like it applies to\n> both based on how it is positioned, but that's clearly not true.\n> Second, the naming is really hard to understand. 
Third, there don't\n> seem to be comments explaining the theory of what is going on here.\n> Fourth, stateGeneration refers not to which generation of state we've\n> got here but to the combination of the state and the generation.\n> However, it's not clear that we ever really use the generation for\n> anything.\n>\n> I think that the direction you went with this is somewhat different\n> from what I had in mind. That may be OK, but let me just explain the\n> difference. We both had in mind the idea that the low two bits of the\n> state would represent the current state and the upper bits would\n> represent the state generation. However, I wasn't necessarily\n> imagining that the only supported operation was making the combined\n> value go up by 1. For instance, I had thought that perhaps the effect\n> of trying to go read-only when we're in the middle of going read-write\n> would be to cancel the previous operation and start the new one. What\n> you have instead is that it errors out. So in your model a change\n> always has to finish before the next one can start, which in turn\n> means that the sequence is completely linear. In my idea the\n> state+generation might go from say 1 to 7, because trying to go\n> read-write would cancel the previous attempt to go read-only and\n> replace it with an attempt to go the other direction, and from 7 we\n> might go to to 9 if somebody now tries to go read-only again before\n> that finishes. In your model, there's never any sort of cancellation\n> of that kind, so you can only go 0->1->2->3->4->5->6->7->8->9 etc.\n>\n\nYes, that made implementation quite simple. 
I was under the impression that we\nwould not have so much concurrency that many backends would be trying to\nchange the system state so quickly.\n\n> One disadvantage of the way you've got it from a user perspective is\n> that if I'm writing a tool, I might get an error telling me that the\n> state change I'm trying to make is already in progress, and then I\n> have to retry. With the other design, I might attempt a state change\n> and have it fail because the change can't be completed, but I won't\n> ever fail because I attempt a state change and it can't be started\n> because we're in the wrong starting state. So, with this design, as\n> the tool author, I may not be able to just say, well, I tried to\n> change the state and it didn't work, so report the error to the user.\n> I think with the other approach that would be more viable. But I might\n> be wrong here; it would be interesting to hear what other people\n> think.\n>\n\nThinking a little bit more, I agree that your approach is more viable, as it can\ncancel a previously in-progress state change.\n\nFor example, with a graceful failover feature, the master might detect that it\nhas lost the connection to all standbys and immediately call the function to\nchange the system state to read-only. But if it regains the connection soon and\nwants to go back to read-write, it might need to wait until the previous state\nchange completes. That could be worst if the system is quite busy and/or some\nbackend is stuck or too busy to absorb the barrier.\n\nIf you want, I will try to change this to work the way you have described in the\nnext version.\n\n> I dislike the use of the term state_gen or StateGen to refer to the\n> combination of a state and a generation. That seems unintuitive. I'm\n> tempted to propose that we just call it a counter, and, assuming we\n> stick with the design as you now have it, explain it with a comment\n> like this in walprohibit.h:\n>\n> \"There are four possible states. 
A brand new database cluster is\n> always initially WALPROHIBIT_STATE_READ_WRITE. If the user tries to\n> make it read only, then we enter the state\n> WALPROHIBIT_STATE_GOING_READ_ONLY. When the transition is complete, we\n> enter the state WALPROHIBIT_STATE_READ_ONLY. If the user subsequently\n> tries to make it read write, we will enter the state\n> WALPROHIBIT_STATE_GOING_READ_WRITE. When that transition is complete,\n> we will enter the state WALPROHIBIT_STATE_READ_WRITE. These four state\n> transitions are the only ones possible; for example, if we're\n> currently in state WALPROHIBIT_STATE_GOING_READ_ONLY, an attempt to go\n> read-write will produce an error, and a second attempt to go read-only\n> will not cause a state change. Thus, we can represent the state as a\n> shared-memory counter whose value only ever changes by adding 1.\n> The initial value at postmaster startup is either 0 or 2, depending on\n> whether the control file specifies that the system is starting\n> read-only or read-write.\"\n>\n\nThanks, added the same.\n\n> And then maybe change all the state_gen references to reference\n> wal_prohibit_counter or, where a shorter name is appropriate, counter.\n>\n\nDone.\n\n> I think this might be clearer if we used different data types for the\n> state and the state/generation combination, with functions to convert\n> between them. e.g. instead of define WALPROHIBIT_STATE_READ_WRITE 0\n> etc. maybe do:\n>\n> typedef enum { ... = 0, ... = 1, ... = 2, ... = 3 } WALProhibitState;\n>\n> And then instead of WALPROHIBIT_CURRENT_STATE perhaps something like:\n>\n> static inline WALProhibitState\n> GetWALProhibitState(uint32 wal_prohibit_counter)\n> {\n> return (WALProhibitState) (wal_prohibit_counter & 3);\n> }\n>\n\nDone.\n\n> I don't really know why we need WALPROHIBIT_NEXT_STATE at all,\n> honestly. It's just a macro to add 1 to an integer. And you don't even\n> use it consistently. 
Like pg_alter_wal_prohibit_state() does this:\n>\n> + /* Server is already in requested state */\n> + if (WALPROHIBIT_NEXT_STATE(new_transition_state) == cur_state)\n> + PG_RETURN_VOID();\n>\n> But then later does this:\n>\n> + next_state_gen = cur_state_gen + 1;\n>\n> Which is exactly the same thing as what you computed above using\n> WALPROHIBIT_NEXT_STATE() but spelled differently. I am not exactly\n> sure how to structure this to make it as simple as possible, but I\n> don't think this is it.\n>\n> Honestly this whole logic here seems correct but a bit hard to follow.\n> Like, maybe:\n>\n> wal_prohibit_counter = pg_atomic_read_u32(&WALProhibitState->shared_counter);\n> switch (GetWALProhibitState(wal_prohibit_counter))\n> {\n> case WALPROHIBIT_STATE_READ_WRITE:\n> if (!walprohibit) return;\n> increment = true;\n> break;\n> case WALPROHIBIT_STATE_GOING_READ_WRITE:\n> if (walprohibit) ereport(ERROR, ...);\n> break;\n> ...\n> }\n>\n> And then just:\n>\n> if (increment)\n> wal_prohibit_counter =\n> pg_atomic_add_fetch_u32(&WALProhibitState->shared_counter, 1);\n> target_counter_value = wal_prohibit_counter + 1;\n> // random stuff\n> // eventually wait until the counter reaches >= target_counter_value\n>\n> This might not be exactly the right idea though. I'm just looking for\n> a way to make it clearer, because I find it a bit hard to understand\n> right now. Maybe you or someone else will have a better idea.\n>\n\nYeah, this makes code much cleaner than before, did the same in the attached\nversion. Thanks again.\n\n> + success =\n> pg_atomic_compare_exchange_u32(&WALProhibitState->shared_state_generation,\n> +\n> &cur_state_gen, next_state_gen);\n> + Assert(success);\n>\n> I am almost positive that this is not OK. I think on some platforms\n> atomics just randomly fail some percentage of the time. You always\n> need a retry loop. 
Anyway, what happens if two people enter this\n> function at the same time and both read the same starting counter\n> value before either does anything?\n>\n> + /* To be sure that any later reads of memory happen\n> strictly after this. */\n> + pg_memory_barrier();\n>\n> You don't need a memory barrier after use of an atomic. The atomic\n> includes a barrier.\n\nUnderstood, removed.\n\nRegards,\nAmul",
"msg_date": "Thu, 28 Jan 2021 17:46:28 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Jan 28, 2021 at 7:17 AM Amul Sul <sulamul@gmail.com> wrote:\n> I am still on this. The things that worried me here are the wal records sequence\n> being written in the startup process -- UpdateFullPageWrites() generate record\n> just before the recovery check-point record and XLogReportParameters() just\n> after that but before any other backend could write any wal record. We might\n> also need to follow the same sequence while changing the system to read-write.\n\nI was able to chat with Andres about this topic for a while today and\nhe made some proposals which seemed pretty good to me. I can't promise\nthat what I'm about to write is an entirely faithful representation of\nwhat he said, but hopefully it's not so far off that he gets mad at me\nor something.\n\n1. If the server starts up and is read-only and\nArchiveRecoveryRequested, clear the read-only state in memory and also\nin the control file, log a message saying that this has been done, and\nproceed. This makes some other cases simpler to deal with.\n\n2. Create a new function with a name like XLogAcceptWrites(). Move the\nfollowing things from StartupXLOG() into that function: (1) the call\nto UpdateFullPageWrites(), (2) the following block of code that does\neither CreateEndOfRecoveryRecord() or RequestCheckpoint() or\nCreateCheckPoint(), (3) the next block of code that runs\nrecovery_end_command, (4) the call to XLogReportParameters(), and (5)\nthe call to CompleteCommitTsInitialization(). Call the new function\nfrom the place where we now call XLogReportParameters(). This would\nmean that (1)-(3) happen later than they do now, which might require\nsome adjustments.\n\n3. 
If the system is starting up read only (and the read-only state\ndidn't get cleared because of #1 above) then don't call\nXLogAcceptWrites() at the end of StartupXLOG() and instead have the\ncheckpointer do it later when the system is going read-write for the\nfirst time.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 16 Feb 2021 17:11:06 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "Hi,\n\nOn 2021-02-16 17:11:06 -0500, Robert Haas wrote:\n> I can't promise that what I'm about to write is an entirely faithful\n> representation of what he said, but hopefully it's not so far off that\n> he gets mad at me or something.\n\nSeems accurate - and also I'm way too tired that I'd be mad ;)\n\n\n> 1. If the server starts up and is read-only and\n> ArchiveRecoveryRequested, clear the read-only state in memory and also\n> in the control file, log a message saying that this has been done, and\n> proceed. This makes some other cases simpler to deal with.\n\nIt seems also to make sense from a behaviour POV to me: Imagine a\n\"smooth\" planned failover with ASRO:\n1) ASRO on primary\n2) promote standby\n3) edit primary config to include primary_conninfo, add standby.signal\n4) restart \"read only primary\"\n\nThere's not really any spot in which it'd be useful to disable ASRO,\nright? But 4) should make the node a normal standby.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 16 Feb 2021 18:20:26 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, Feb 17, 2021 at 7:50 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2021-02-16 17:11:06 -0500, Robert Haas wrote:\n\nThank you very much to both of you !\n\n> > I can't promise that what I'm about to write is an entirely faithful\n> > representation of what he said, but hopefully it's not so far off that\n> > he gets mad at me or something.\n>\n> Seems accurate - and also I'm way too tired that I'd be mad ;)\n>\n>\n> > 1. If the server starts up and is read-only and\n> > ArchiveRecoveryRequested, clear the read-only state in memory and also\n> > in the control file, log a message saying that this has been done, and\n> > proceed. This makes some other cases simpler to deal with.\n>\n> It seems also to make sense from a behaviour POV to me: Imagine a\n> \"smooth\" planned failover with ASRO:\n> 1) ASRO on primary\n> 2) promote standby\n> 3) edit primary config to include primary_conninfo, add standby.signal\n> 4) restart \"read only primary\"\n>\n> There's not really any spot in which it'd be useful to do disable ASRO,\n> right? But 4) should make the node a normal standby.\n>\n\nUnderstood.\n\nIn the attached version I have made the changes according to what Robert has\nsummarised in his previous mail[1].\n\nIn addition to that, I also moved the code that updates the control file to\nXLogAcceptWrites(), which will also get skipped when the system is read-only (wal\nprohibited). The system will remain in crash recovery, and that will\nchange once we do the end-of-recovery checkpoint and the WAL write operations\nthat we were skipping from startup. The benefit of keeping the system in\nrecovery mode is that it fixes my concern[2] where other backends could connect\nand write wal records while we were changing the system to read-write. 
Now, no\nother backend is allowed to write WAL; the UpdateFullPageWrites(), end-of-recovery\ncheckpoint, and XLogReportParameters() operations will be performed in the same\nsequence as in startup when the system is changed to read-write.\n\nRegards,\nAmul\n\n1] http://postgr.es/m/CA+TgmoZ=CCTbAXxMTYZoGXEgqzOz9smkBWrDpsacpjvFcGCuaw@mail.gmail.com\n2] http://postgr.es/m/CAAJ_b97xX-nqRyM_uXzecpH9aSgoMROrDNhrg1N51fDCDwoy2g@mail.gmail.com",
"msg_date": "Fri, 19 Feb 2021 17:43:08 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "Attached is the rebase version for the latest master head(b3a9e9897ec).\n\nRegards,\nAmul",
"msg_date": "Fri, 26 Feb 2021 17:10:34 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Fri, Feb 19, 2021 at 5:43 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> In the attached version I have made the changes accordingly what Robert has\n> summarised in his previous mail[1].\n>\n> In addition to that, I also move the code that updates the control file to\n> XLogAcceptWrites() which will also get skipped when the system is read-only (wal\n> prohibited). The system will be in the crash recovery, and that will\n> change once we do the end-of-recovery checkpoint and the WAL writes operation\n> which we were skipping from startup. The benefit of keeping the system in\n> recovery mode is that it fixes my concern[2] where other backends could connect\n> and write wal records while we were changing the system to read-write. Now, no\n> other backends allow a wal write; UpdateFullPageWrites(), end-of-recovery\n> checkpoint, and XLogReportParameters() operations will be performed in the same\n> sequence as it is in the startup while changing the system to read-write.\n\nI was looking into the changes, especially the recovery-related problem. I\nhave a few questions:\n\n1.\n+static bool\n+XLogAcceptWrites(bool needChkpt, bool bgwriterLaunched,\n+ bool localPromoteIsTriggered, XLogReaderState *xlogreader,\n+ bool archiveRecoveryRequested, TimeLineID endOfLogTLI,\n+ XLogRecPtr endOfLog, TimeLineID thisTimeLineID)\n+{\n+ bool promoted = false;\n+\n+ /*\n.....\n+ if (localPromoteIsTriggered)\n {\n- checkPointLoc = ControlFile->checkPoint;\n+ XLogRecord *record;\n\n...\n+ record = ReadCheckpointRecord(xlogreader,\n+ ControlFile->checkPoint,\n+ 1, false);\n if (record != NULL)\n {\n promoted = true;\n ...\n CreateEndOfRecoveryRecord();\n }\n\nWhy do we need to move the promote-related code into XLogAcceptWrites?\nIMHO, this promote-related handling should be in StartupXLOG only.\nThat will look cleaner.\n\n>\n> 1] http://postgr.es/m/CA+TgmoZ=CCTbAXxMTYZoGXEgqzOz9smkBWrDpsacpjvFcGCuaw@mail.gmail.com\n> 2] 
http://postgr.es/m/CAAJ_b97xX-nqRyM_uXzecpH9aSgoMROrDNhrg1N51fDCDwoy2g@mail.gmail.com\n\n2.\nI did not clearly understand your concern in point [2], because of\nwhich you have to postpone RECOVERY_STATE_DONE until the system is set\nback to read-write. Can you explain this?\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 2 Mar 2021 17:52:23 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Tue, Mar 2, 2021 at 5:52 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, Feb 19, 2021 at 5:43 PM Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > In the attached version I have made the changes accordingly what Robert has\n> > summarised in his previous mail[1].\n> >\n> > In addition to that, I also move the code that updates the control file to\n> > XLogAcceptWrites() which will also get skipped when the system is read-only (wal\n> > prohibited). The system will be in the crash recovery, and that will\n> > change once we do the end-of-recovery checkpoint and the WAL writes operation\n> > which we were skipping from startup. The benefit of keeping the system in\n> > recovery mode is that it fixes my concern[2] where other backends could connect\n> > and write wal records while we were changing the system to read-write. Now, no\n> > other backends allow a wal write; UpdateFullPageWrites(), end-of-recovery\n> > checkpoint, and XLogReportParameters() operations will be performed in the same\n> > sequence as it is in the startup while changing the system to read-write.\n>\n> I was looking into the changes espcially recovery related problem, I\n> have a few questions\n>\n> 1.\n> +static bool\n> +XLogAcceptWrites(bool needChkpt, bool bgwriterLaunched,\n> + bool localPromoteIsTriggered, XLogReaderState *xlogreader,\n> + bool archiveRecoveryRequested, TimeLineID endOfLogTLI,\n> + XLogRecPtr endOfLog, TimeLineID thisTimeLineID)\n> +{\n> + bool promoted = false;\n> +\n> + /*\n> .....\n> + if (localPromoteIsTriggered)\n> {\n> - checkPointLoc = ControlFile->checkPoint;\n> + XLogRecord *record;\n>\n> ...\n> + record = ReadCheckpointRecord(xlogreader,\n> + ControlFile->checkPoint,\n> + 1, false);\n> if (record != NULL)\n> {\n> promoted = true;\n> ...\n> CreateEndOfRecoveryRecord();\n> }\n>\n> Why do we need to move promote related code in XLogAcceptWrites?\n> IMHO, this promote related handling should be in StartupXLOG only.\n\nXLogAcceptWrites() 
groups together all the WAL write operations that happen at the\nend of StartupXLOG(). The only exception is the promotion checkpoint.\n\n> That will look cleaner.\n\nI think it would be better to move the promotion checkpoint call inside\nXLogAcceptWrites() as well. That way we can say XLogAcceptWrites() is the part of\nStartupXLOG() that does the required WAL writes. Thoughts?\n\n>\n> >\n> > 1] http://postgr.es/m/CA+TgmoZ=CCTbAXxMTYZoGXEgqzOz9smkBWrDpsacpjvFcGCuaw@mail.gmail.com\n> > 2] http://postgr.es/m/CAAJ_b97xX-nqRyM_uXzecpH9aSgoMROrDNhrg1N51fDCDwoy2g@mail.gmail.com\n>\n> 2.\n> I did not clearly understand your concern in point [2], because of\n> which you have to postpone RECOVERY_STATE_DONE untill system is set\n> back to read-write. Can you explain this?\n>\n\nSure, for that let me explain how this transition to read-write occurs. When a\nbackend executes a function (ie. pg_prohibit_wal(false)) to make the system\nread-write, that system state change will be conveyed by the Checkpointer\nprocess to all existing backends using a global barrier, and while the Checkpointer\nis in the process of conveying this barrier, any existing backend that has already\nabsorbed the barrier can write new records.\n\nWe don't want that to happen in cases where the previous recovery-end checkpoint was\nskipped in startup. We want the Checkpointer first to convey the barrier to all\nbackends, but the backends shouldn't write WAL until the Checkpointer writes the\nrecovery-end-checkpoint record.\n\nTo keep these backends from writing WAL I think we should keep the server in\ncrash recovery mode until UpdateFullPageWrites(),\nend-of-recovery-checkpoint, and XLogReportParameters() are performed.\n\nRegards,\nAmul",
"msg_date": "Tue, 2 Mar 2021 19:54:15 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Tue, Mar 2, 2021 at 7:54 PM Amul Sul <sulamul@gmail.com> wrote:\n> XLogAcceptWrites() tried to club all the WAL write operations that happen at the\n> end of StartupXLOG(). The only exception is that promotion checkpoint.\n\nOkay, I was expecting that XLogAcceptWrites should have all the WAL\nwrite-related operations which should either be executed at the end of\nStartupXLOG() if the system is not read-only or after the system is\nset back to read-write. But promotion-related code is completely\nirrelevant when it is executed from PerformPendingStartupOperations.\nSo I am not entirely sure whether we want to keep that stuff in\nXLogAcceptWrites.\n\n> > That will look cleaner.\n>\n> I think it would be better to move the promotion checkpoint call inside\n> XLogAcceptWrites() as well. So that we can say XLogAcceptWrites() is a part of\n> StartupXLOG() does the required WAL writes. Thoughts?\n\nOkay, so if we want to keep all the WAL writes inside XLogAcceptWrites,\nincluding promotion-related stuff, then +1 for moving this also inside\nXLogAcceptWrites.\n\n> > >\n> > > 1] http://postgr.es/m/CA+TgmoZ=CCTbAXxMTYZoGXEgqzOz9smkBWrDpsacpjvFcGCuaw@mail.gmail.com\n> > > 2] http://postgr.es/m/CAAJ_b97xX-nqRyM_uXzecpH9aSgoMROrDNhrg1N51fDCDwoy2g@mail.gmail.com\n> >\n> > 2.\n> > I did not clearly understand your concern in point [2], because of\n> > which you have to postpone RECOVERY_STATE_DONE untill system is set\n> > back to read-write. Can you explain this?\n> >\n>\n> Sure, for that let me explain how this transition to read-write occurs. When a\n> backend executes a function (ie. 
pg_prohibit_wal(false)) to make the system\n> read-write then that system state changes will be conveyed by the Checkpointer\n> process to all existing backends using global barrier and while Checkpointer in\n> the progress of conveying this barrier, any existing backends who might have\n> absorbed barriers can write new records.\n>\n> We don't want that to happen in cases where previous recovery-end-checkpoint is\n> skipped in startup. We want Checkpointer first to convey the barrier to all\n> backends but, the backend shouldn't write wal until the Checkpointer writes\n> recovery-end-checkpoint record.\n>\n> To refrain these backends from writing WAL I think we should keep the server in\n> crash recovery mode until UpdateFullPageWrites(),\n> end-of-recovery-checkpoint, and XLogReportParameters() are performed.\n\nThanks for the explanation. Now I understand the problem; however, I\nam not sure whether keeping the system in recovery is the best\nway to solve this, but as of now I don't have anything better to\nsuggest, and immediately I couldn’t think of any problem with this\nsolution. But I will think about this again.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 2 Mar 2021 21:01:41 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Tue, Mar 2, 2021 at 9:01 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> >\n> > We don't want that to happen in cases where previous recovery-end-checkpoint is\n> > skipped in startup. We want Checkpointer first to convey the barrier to all\n> > backends but, the backend shouldn't write wal until the Checkpointer writes\n> > recovery-end-checkpoint record.\n> >\n> > To refrain these backends from writing WAL I think we should keep the server in\n> > crash recovery mode until UpdateFullPageWrites(),\n> > end-of-recovery-checkpoint, and XLogReportParameters() are performed.\n\nI did not read the code for this, but let me ask something about this\ncase. Why do we want the checkpointer to convey the barrier to all the\nbackends before completing the end-of-recovery checkpoint and other\nstuff? Is it because the system is still in the WAL prohibited state? Is\nit possible that as soon as we get the pg_prohibit_wal(false) request\nthe receiving backend starts allowing WAL writing for itself,\nfinishes all the post-recovery pending work, and then informs the\ncheckpointer to inform all other backends?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 3 Mar 2021 12:07:53 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, Mar 3, 2021 at 12:08 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Mar 2, 2021 at 9:01 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> > >\n> > > We don't want that to happen in cases where previous recovery-end-checkpoint is\n> > > skipped in startup. We want Checkpointer first to convey the barrier to all\n> > > backends but, the backend shouldn't write wal until the Checkpointer writes\n> > > recovery-end-checkpoint record.\n> > >\n> > > To refrain these backends from writing WAL I think we should keep the server in\n> > > crash recovery mode until UpdateFullPageWrites(),\n> > > end-of-recovery-checkpoint, and XLogReportParameters() are performed.\n>\n> I did not read the code for this, but let me ask something about this\n> case. Why do we want checkpointer to convey the barrier to all the\n> backend before completing the end of recovery checkpoint and other\n> stuff? Is it because the system is still in WAL prohibited state?\n\nConsider the previous case, where the user wants to change the system to\nread-write. When a permitted user executes pg_prohibit_wal(false), the WAL\nprohibited state in shared memory is updated to GOING_READ_WRITE,\nwhich is the transition state, and the backend then waits until the transition\ncompletes and the final state (i.e. READ_WRITE) gets updated\nin shared memory. Setting the final state is the job of the Checkpointer process.\n\nWe have integrated code into the Checkpointer process such that if it sees a WAL\nprohibit transition state then it completes that as soon as possible by doing the\nnecessary steps, i.e. 
emitting super barriers, then updating the final WAL\nprohibited state in shared memory and in the control file.\n\n> Is\n> it possible that as soon as we get the pg_prohibit_wal(false) request\n> the receiving backend start allowing the WAL writing for itself and\n> finish the all post-recovery pending work and then inform the\n> checkpointer to inform all other backends?\n>\n\nYes, it is possible to allow WAL writes temporarily for itself by setting\nLocalXLogInsertAllowed, but when we request the Checkpointer for the end-of-recovery\ncheckpoint, the first thing it will do is the WAL prohibit state transition, and\nthen the recovery-end checkpoint.\n\nAlso, allowing WAL writes in read-only (WAL prohibited) mode is against\nthis feature's principle.\n\nRegards,\nAmul",
"msg_date": "Wed, 3 Mar 2021 16:50:19 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, Mar 3, 2021 at 4:50 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Wed, Mar 3, 2021 at 12:08 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> Yes, it is possible to allow wal temporarily for itself by setting\n> LocalXLogInsertAllowed, but when we request Checkpointer for the end-of-recovery\n> checkpoint, the first thing it will do is that wal prohibit state transition\n> then recovery-end-checkpoint.\n>\n> Also, allowing WAL write in read-only (WAL prohibited state) mode is against\n> this feature principle.\n\nSo IIUC, before the checkpointer changes the state in the control file, we\nanyway inform the other backends and then they are allowed to write WAL;\nis that right? If that is true, then what is the problem in first doing\nthe pending post-recovery work and then informing the backends about\nthe state change? I mean, we are in the process of changing the state to\nread-write, so why is it necessary to inform all the backends before we\ncan write WAL? Are we afraid that if we write the WAL and there\nis some failure before we make the system read-write, then it will\nbreak the principle of the feature; I mean, eventually the system will stay\nread-only but we wrote WAL? If so, then currently, are we ensuring\nthat once we inform the backends and they are allowed to write\nWAL, there is no chance of failure and the system state is guaranteed\nto change? If all that is true then I will take my point back.\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 3 Mar 2021 19:56:06 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Tue, Mar 2, 2021 at 7:22 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Why do we need to move promote related code in XLogAcceptWrites?\n> IMHO, this promote related handling should be in StartupXLOG only.\n> That will look cleaner.\n\nThe key design question here, at least in my mind, is what exactly\nhappens after prohibit-WAL + system-crash + recovery-finishes. We\nclearly can't write the checkpoint or end-of-recovery record and\nproceed with business as usual, but are we still in recovery? Either\n(1) we are technically still in recovery, stopping just short of\nentering normal running, and will emerge from recovery when WAL is\npermitted again; or (2) we have technically finished recovery, but\ndeferred some of the actions that would normally occur at that time\nuntil a later point. Maybe this is academic distinction as much as\nanything, but the idea is if we choose #1 then we should do as little\nas possible at the point when recovery finishes and defer as much as\npossible until we actually enter normal running; whereas if we choose\n#2 we should do as much as possible at the point when recovery\nfinishes and defer only those things which absolutely have to be\ndeferred. That said, I and I think also Andres are voting for #2.\n\nBut if we go that way, that precludes what you are proposing here. If\nwe picked #1 then it would be natural for the startup process to\nremain active and the control file update to be postponed until WAL\nwrites are re-enabled; but under model #2 we want, if possible, for\nthe startup process to exit and the control file update to happen\nnormally, and only the writing of the actual WAL records to be\ndeferred.\n\nWhat I find much odder, looking at the present patch, is that\nPerformPendingStartupOperations() gets called from pg_prohibit_wal()\nrather than by the checkpointer. 
If the checkpointer is the process\nthat is in charge of coordinating the change between a read-only state\nand a read-write state, then it ought to also do this. I also think\nthat the PerformPendingStartupOperations() wrapper is unnecessary.\nJust invert the sense of the XLogCtl flag: xlogAllowWritesDone, rather\nthan startupCrashRecoveryPending, and have XLogAcceptWrites() set it\n(and return without doing anything if it's already set). Then the\ncheckpointer can just call the function unconditionally whenever we go\nread-write, and for a bonus we will have much better naming\nconsistency, rather than calling the same thing \"xlog accept writes\"\nin one place, \"pending startup operations\" in another, and \"startup\ncrash recovery pending\" in a third.\n\nSince this feature is basically no longer \"alter system read only\" but\nrather \"pg_prohibit_wal\" I think we also ought to rename the GUC,\nsystem_is_read_only -> wal_prohibited.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 3 Mar 2021 10:26:09 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, Mar 3, 2021 at 8:56 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Mar 2, 2021 at 7:22 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Why do we need to move promote related code in XLogAcceptWrites?\n> > IMHO, this promote related handling should be in StartupXLOG only.\n> > That will look cleaner.\n>\n> The key design question here, at least in my mind, is what exactly\n> happens after prohibit-WAL + system-crash + recovery-finishes. We\n> clearly can't write the checkpoint or end-of-recovery record and\n> proceed with business as usual, but are we still in recovery? Either\n> (1) we are technically still in recovery, stopping just short of\n> entering normal running, and will emerge from recovery when WAL is\n> permitted again; or (2) we have technically finished recovery, but\n> deferred some of the actions that would normally occur at that time\n> until a later point. Maybe this is academic distinction as much as\n> anything, but the idea is if we choose #1 then we should do as little\n> as possible at the point when recovery finishes and defer as much as\n> possible until we actually enter normal running; whereas if we choose\n> #2 we should do as much as possible at the point when recovery\n> finishes and defer only those things which absolutely have to be\n> deferred. That said, I and I think also Andres are voting for #2.\n>\n> But if we go that way, that precludes what you are proposing here. 
If\n> we picked #1 then it would be natural for the startup process to\n> remain active and the control file update to be postponed until WAL\n> writes are re-enabled; but under model #2 we want, if possible, for\n> the startup process to exit and the control file update to happen\n> normally, and only the writing of the actual WAL records to be\n> deferred.\n>\n\nThe current patch does a mix of both: the startup process exits without doing the\nWAL writes and control file updates; those happen later, when the system\nchanges to read-write.\n\n> What I find much odder, looking at the present patch, is that\n> PerformPendingStartupOperations() gets called from pg_prohibit_wal()\n> rather than by the checkpointer. If the checkpointer is the process\n> that is in charge of coordinating the change between a read-only state\n> and a read-write state, then it ought to also do this. I also think\n> that the PerformPendingStartupOperations() wrapper is unnecessary.\n> Just invert the sense of the XLogCtl flag: xlogAllowWritesDone, rather\n> than startupCrashRecoveryPending, and have XLogAcceptWrites() set it\n> (and return without doing anything if it's already set). Then the\n> checkpointer can just call the function unconditionally whenever we go\n> read-write, and for a bonus we will have much better naming\n> consistency, rather than calling the same thing \"xlog accept writes\"\n> in one place, \"pending startup operations\" in another, and \"startup\n> crash recovery pending\" in a third.\n>\n\nOk, in the attached version, I have used the xlogAllowWritesDone variable.\nTo match the naming sense, it should be set to 'false' initially and\nshould get set to 'true' when the XLogAcceptWrites() operation completes.\n\nI have removed the PerformPendingStartupOperations() wrapper function and I have\nslightly changed XLogAcceptWrites() to minimize its parameter count so that it\ncan use available global variable values instead of parameters. 
Unfortunately,\nit cannot be called from the checkpointer unconditionally: that would create a race\nwith the startup process. If the startup process is still in recovery when the\ncheckpointer launches and sees xlogAllowWritesDone = false, the checkpointer would\ngo ahead with those WAL write operations and the end-of-recovery checkpoint, which\nwould be a disaster. Therefore, I moved this XLogAcceptWrites() call inside\nProcessWALProhibitStateChangeRequest(), called when the system is in the\nGOING_READ_WRITE transition state. Since ProcessWALProhibitStateChangeRequest()\ngets called from different places in the checkpointer process, which creates\ncascaded calls to XLogAcceptWrites(), to avoid that I am setting\nxlogAllowWritesDone = true immediately after it gets checked in\nXLogAcceptWrites(), which I think is not the right approach; technically, it\nshould be updated at the end of XLogAcceptWrites().\n\nI think instead of xlogAllowWritesDone, we should use its inverse, as\nbefore, e.g. xlogAllowWritesPending or xlogAllowWritesSkipped or something\nelse, which will get explicitly set to 'true' when we skip the XLogAcceptWrites()\ncall. That will avoid the checkpointer's race with the startup process since\ninitially it will be 'false', and if it is 'false' we will return immediately\nfrom XLogAcceptWrites(). Also, we then don't need to move XLogAcceptWrites() inside\nProcessWALProhibitStateChangeRequest(); it can be called from the CheckpointerMain()\nloop, which also avoids cascaded calls, and we don't need to update the flag until we\ncomplete those write operations. Thoughts/Comments?\n\n> Since this feature is basically no longer \"alter system read only\" but\n> rather \"pg_prohibit_wal\" I think we also ought to rename the GUC,\n> system_is_read_only -> wal_prohibited.\n>\n\nDone.\n\nRegards,\nAmul",
"msg_date": "Thu, 4 Mar 2021 23:02:27 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, Mar 3, 2021 at 7:56 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Mar 3, 2021 at 4:50 PM Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > On Wed, Mar 3, 2021 at 12:08 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > Yes, it is possible to allow wal temporarily for itself by setting\n> > LocalXLogInsertAllowed, but when we request Checkpointer for the end-of-recovery\n> > checkpoint, the first thing it will do is that wal prohibit state transition\n> > then recovery-end-checkpoint.\n> >\n> > Also, allowing WAL write in read-only (WAL prohibited state) mode is against\n> > this feature principle.\n>\n> So IIUC before the checkpoint change the state in the control file we\n> anyway inform other backend and then they are allowed to write the WAL\n> is the right? If that is true then what is the problem in first doing\n> the pending post-recovery process and then informing the backend about\n> the state change. I mean we are in a process of changing the state to\n> read-write so why it is necessary to inform all the backend before we\n> can write WAL? Are we afraid that after we write the WAL and if there\n> is some failure before we make the system read-write then it will\n> break the principle of the feature, I mean eventually system will stay\n> read-only but we wrote the WAL? If so then currently, are we assuring\n> that once we inform the backend and backend are allowed to write the\n> WAL there are no chances of failure and the system state is guaranteed\n> to be changed. If all that is true then I will take my point back.\n>\n\nThe wal prohibit state transition handling code is integrated into various\nplaces of the checkpointer process so that it can pick state changes as soon as\npossible. 
Before informing other backends we can do UpdateFullPageWrites(), but\nwhen we next attempt the end-of-recovery checkpoint write operation, the\ncheckpointer will hit ProcessWALProhibitStateChangeRequest() first, which will\ntry to complete the WAL prohibit state transition and only then write the\ncheckpoint record.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Thu, 4 Mar 2021 23:21:32 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, Mar 3, 2021 at 8:56 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Mar 2, 2021 at 7:22 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Why do we need to move promote related code in XLogAcceptWrites?\n> > IMHO, this promote related handling should be in StartupXLOG only.\n> > That will look cleaner.\n>\n> The key design question here, at least in my mind, is what exactly\n> happens after prohibit-WAL + system-crash + recovery-finishes. We\n> clearly can't write the checkpoint or end-of-recovery record and\n> proceed with business as usual, but are we still in recovery? Either\n> (1) we are technically still in recovery, stopping just short of\n> entering normal running, and will emerge from recovery when WAL is\n> permitted again; or (2) we have technically finished recovery, but\n> deferred some of the actions that would normally occur at that time\n> until a later point. Maybe this is academic distinction as much as\n> anything, but the idea is if we choose #1 then we should do as little\n> as possible at the point when recovery finishes and defer as much as\n> possible until we actually enter normal running; whereas if we choose\n> #2 we should do as much as possible at the point when recovery\n> finishes and defer only those things which absolutely have to be\n> deferred. That said, I and I think also Andres are voting for #2.\n>\n> But if we go that way, that precludes what you are proposing here. If\n> we picked #1 then it would be natural for the startup process to\n> remain active and the control file update to be postponed until WAL\n> writes are re-enabled; but under model #2 we want, if possible, for\n> the startup process to exit and the control file update to happen\n> normally, and only the writing of the actual WAL records to be\n> deferred.\n\nMaybe I did not put my point clearly, let me clarify that. First, I\nwas also inclined that it should work like #2. 
And, if it works like\n#2, then I would assume that the code that goes into the XLogAcceptWrites\nfunction should be minimal: only the part that we want to execute after the\nsystem is back in read-write mode. So basically, XLogAcceptWrites\nshould only keep the common code that we want to execute either\nat the end of StartupXLOG if the system is normal, or\nwhen the system comes back to read-write if it was read-only. So\nmy point was that all the uncommon code that we have moved into\nXLogAcceptWrites should be kept inside the StartupXLOG function only.\nSo I think the promotion-related code doesn't belong in the\nXLogAcceptWrites function.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 8 Mar 2021 16:36:27 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Mar 4, 2021 at 11:02 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Wed, Mar 3, 2021 at 8:56 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Tue, Mar 2, 2021 at 7:22 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > Why do we need to move promote related code in XLogAcceptWrites?\n> > > IMHO, this promote related handling should be in StartupXLOG only.\n> > > That will look cleaner.\n> >\n> > The key design question here, at least in my mind, is what exactly\n> > happens after prohibit-WAL + system-crash + recovery-finishes. We\n> > clearly can't write the checkpoint or end-of-recovery record and\n> > proceed with business as usual, but are we still in recovery? Either\n> > (1) we are technically still in recovery, stopping just short of\n> > entering normal running, and will emerge from recovery when WAL is\n> > permitted again; or (2) we have technically finished recovery, but\n> > deferred some of the actions that would normally occur at that time\n> > until a later point. Maybe this is academic distinction as much as\n> > anything, but the idea is if we choose #1 then we should do as little\n> > as possible at the point when recovery finishes and defer as much as\n> > possible until we actually enter normal running; whereas if we choose\n> > #2 we should do as much as possible at the point when recovery\n> > finishes and defer only those things which absolutely have to be\n> > deferred. That said, I and I think also Andres are voting for #2.\n> >\n> > But if we go that way, that precludes what you are proposing here. 
If\n> > we picked #1 then it would be natural for the startup process to\n> > remain active and the control file update to be postponed until WAL\n> > writes are re-enabled; but under model #2 we want, if possible, for\n> > the startup process to exit and the control file update to happen\n> > normally, and only the writing of the actual WAL records to be\n> > deferred.\n> >\n>\n> Current patch doing a mix of both, startup process exits without doing\n> WAL writes and control file updates, that happens later when system\n> changes to read-write.\n>\n> > What I find much odder, looking at the present patch, is that\n> > PerformPendingStartupOperations() gets called from pg_prohibit_wal()\n> > rather than by the checkpointer. If the checkpointer is the process\n> > that is in charge of coordinating the change between a read-only state\n> > and a read-write state, then it ought to also do this. I also think\n> > that the PerformPendingStartupOperations() wrapper is unnecessary.\n> > Just invert the sense of the XLogCtl flag: xlogAllowWritesDone, rather\n> > than startupCrashRecoveryPending, and have XLogAcceptWrites() set it\n> > (and return without doing anything if it's already set). 
Then the\n> > checkpointer can just call the function unconditionally whenever we go\n> > read-write, and for a bonus we will have much better naming\n> > consistency, rather than calling the same thing \"xlog accept writes\"\n> > in one place, \"pending startup operations\" in another, and \"startup\n> > crash recovery pending\" in a third.\n> >\n>\n> Ok, in the attached version, I have used the xlogAllowWritesDone variable.\n> To match the naming sense, it should be set to 'false' initially and\n> should get set to 'true' when the XLogAcceptWrites() operation completes.\n>\n> I have removed the PerformPendingStartupOperations() wrapper function and I have\n> slightly changed XLogAcceptWrites() to minimize its parameter count so that it\n> can use available global variable values instead of parameters. Unfortunately,\n> it cannot be called from checkpointer unconditionally, it will create a race\n> with startup process when startup process still in recovery and checkpointer\n> launches and see that xlogAllowWritesDone = false, will go-ahead for those wal\n> write operations and end-of-recovery checkpoint which will be a\n> disaster. Therefore, I moved this XLogAcceptWrites() function inside\n> ProcessWALProhibitStateChangeRequest() and called when the system is in\n> GOING_READ_WRITE transition state. Since ProcessWALProhibitStateChangeRequest()\n> gets called from a different places of checkpointer process which creates a\n> cascaded call to XLogAcceptWrites() function, to avoid that I am updating\n> xlogAllowWritesDone = true immediately after it gets checked in\n> XLogAcceptWrites() which I think is not the right approach, technically, it\n> should be updated at the end of XLogAcceptWrites().\n>\n> I think instead of xlogAllowWritesDone, we should use invert of it, as\n> the previous, e.g.xlogAllowWritesPending or xlogAllowWritesSkipped or something\n> else and that will be get explicitly set 'true' when we skip XLogAcceptWrites()\n> call. 
That will avoid the race of checkpointer process with the startup since\n> initially, it will be 'false', and if it is 'false' we will return immediately\n> from XLogAcceptWrites(). Also, we don't need to move XLogAcceptWrites() inside\n> ProcessWALProhibitStateChangeRequest(), it can be called from checkpointerMain()\n> loop which also avoids cascade calls and we don't need to update it until we\n> complete those write operations. Thoughts/Comments?\n>\nIn the attached version, I was able to fix most of the concerns that I had. Right\nnow, having the xlogAllowWritesDone variable is fine, and it gets updated\nat the end of the XLogAcceptWrites() function, unlike before.\nXLogAcceptWrites() is called from ProcessWALProhibitStateChangeRequest()\nwhile the system state changes to read-write, as before. Now, to avoid the\nrecursive call to ProcessWALProhibitStateChangeRequest() from the\nend-of-recovery checkpoint happening in XLogAcceptWrites(), I have added a\nprivate boolean state variable in walprohibit.c; using it, the WAL prohibit\nstate transition can be put on hold for some time, and I did the same while\ncalling XLogAcceptWrites().\n\nRegards,\nAmul",
"msg_date": "Tue, 9 Mar 2021 16:00:30 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Tue, Mar 9, 2021 at 3:31 PM Amul Sul <sulamul@gmail.com> wrote:\n\n> On Thu, Mar 4, 2021 at 11:02 PM Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > On Wed, Mar 3, 2021 at 8:56 PM Robert Haas <robertmhaas@gmail.com>\n> wrote:\n> > >\n> > > On Tue, Mar 2, 2021 at 7:22 AM Dilip Kumar <dilipbalaut@gmail.com>\n> wrote:\n> > > > Why do we need to move promote related code in XLogAcceptWrites?\n> > > > IMHO, this promote related handling should be in StartupXLOG only.\n> > > > That will look cleaner.\n> > >\n> > > The key design question here, at least in my mind, is what exactly\n> > > happens after prohibit-WAL + system-crash + recovery-finishes. We\n> > > clearly can't write the checkpoint or end-of-recovery record and\n> > > proceed with business as usual, but are we still in recovery? Either\n> > > (1) we are technically still in recovery, stopping just short of\n> > > entering normal running, and will emerge from recovery when WAL is\n> > > permitted again; or (2) we have technically finished recovery, but\n> > > deferred some of the actions that would normally occur at that time\n> > > until a later point. Maybe this is academic distinction as much as\n> > > anything, but the idea is if we choose #1 then we should do as little\n> > > as possible at the point when recovery finishes and defer as much as\n> > > possible until we actually enter normal running; whereas if we choose\n> > > #2 we should do as much as possible at the point when recovery\n> > > finishes and defer only those things which absolutely have to be\n> > > deferred. That said, I and I think also Andres are voting for #2.\n> > >\n> > > But if we go that way, that precludes what you are proposing here. 
If\n> > > we picked #1 then it would be natural for the startup process to\n> > > remain active and the control file update to be postponed until WAL\n> > > writes are re-enabled; but under model #2 we want, if possible, for\n> > > the startup process to exit and the control file update to happen\n> > > normally, and only the writing of the actual WAL records to be\n> > > deferred.\n> > >\n> >\n> > Current patch doing a mix of both, startup process exits without doing\n> > WAL writes and control file updates, that happens later when system\n> > changes to read-write.\n> >\n> > > What I find much odder, looking at the present patch, is that\n> > > PerformPendingStartupOperations() gets called from pg_prohibit_wal()\n> > > rather than by the checkpointer. If the checkpointer is the process\n> > > that is in charge of coordinating the change between a read-only state\n> > > and a read-write state, then it ought to also do this. I also think\n> > > that the PerformPendingStartupOperations() wrapper is unnecessary.\n> > > Just invert the sense of the XLogCtl flag: xlogAllowWritesDone, rather\n> > > than startupCrashRecoveryPending, and have XLogAcceptWrites() set it\n> > > (and return without doing anything if it's already set). 
Then the\n> > > checkpointer can just call the function unconditionally whenever we go\n> > > read-write, and for a bonus we will have much better naming\n> > > consistency, rather than calling the same thing \"xlog accept writes\"\n> > > in one place, \"pending startup operations\" in another, and \"startup\n> > > crash recovery pending\" in a third.\n> > >\n> >\n> > Ok, in the attached version, I have used the xlogAllowWritesDone\n> variable.\n> > To match the naming sense, it should be set to 'false' initially and\n> > should get set to 'true' when the XLogAcceptWrites() operation completes.\n> >\n> > I have removed the PerformPendingStartupOperations() wrapper function\n> and I have\n> > slightly changed XLogAcceptWrites() to minimize its parameter count so\n> that it\n> > can use available global variable values instead of parameters.\n> Unfortunately,\n> > it cannot be called from checkpointer unconditionally, it will create a\n> race\n> > with startup process when startup process still in recovery and\n> checkpointer\n> > launches and see that xlogAllowWritesDone = false, will go-ahead for\n> those wal\n> > write operations and end-of-recovery checkpoint which will be a\n> > disaster. Therefore, I moved this XLogAcceptWrites() function inside\n> > ProcessWALProhibitStateChangeRequest() and called when the system is in\n> > GOING_READ_WRITE transition state. 
Since\n> ProcessWALProhibitStateChangeRequest()\n> > gets called from a different places of checkpointer process which\n> creates a\n> > cascaded call to XLogAcceptWrites() function, to avoid that I am updating\n> > xlogAllowWritesDone = true immediately after it gets checked in\n> > XLogAcceptWrites() which I think is not the right approach, technically,\n> it\n> > should be updated at the end of XLogAcceptWrites().\n> >\n> > I think instead of xlogAllowWritesDone, we should use invert of it, as\n> > the previous, e.g.xlogAllowWritesPending or xlogAllowWritesSkipped or\n> something\n> > else and that will be get explicitly set 'true' when we skip\n> XLogAcceptWrites()\n> > call. That will avoid the race of checkpointer process with the startup\n> since\n> > initially, it will be 'false', and if it is 'false' we will return\n> immediately\n> > from XLogAcceptWrites(). Also, we don't need to move XLogAcceptWrites()\n> inside\n> > ProcessWALProhibitStateChangeRequest(), it can be called from\n> checkpointerMain()\n> > loop which also avoids cascade calls and we don't need to update it\n> until we\n> > complete those write operations. Thoughts/Comments?\n> >\n> In the attached version, I am able to fix most of the concerns that I had.\n> Right\n> now, having the xlogAllowWritesDone variable is fine, and that will get\n> updated\n> at the end of the XLogAcceptWrites() function, unlike the previous.\n> XLogAcceptWrites() will be called from\n> ProcessWALProhibitStateChangeRequest()\n> while the system state changes to read-write, like previous. 
Now to avoid\n> the\n> recursive call to ProcessWALProhibitStateChangeRequest() from the\n> end-of-recovery checkpoint happening in XLogAcceptWrites(), I have added a\n> private boolean state variable in walprohibit.c, using it wal prohibit\n> state\n> transition can be put on hold for some time; did the same while calling\n> XLogAcceptWrites().\n>\n> Regards,\n> Amul\n>\n\nOne of the\npatches (v18-0002-Error-or-Assert-before-START_CRIT_SECTION-for-WA.patch)\nfrom the latest patchset does not apply successfully.\n\nhttp://cfbot.cputube.org/patch_32_2602.log\n\n=== applying patch\n./v18-0002-Error-or-Assert-before-START_CRIT_SECTION-for-WA.patch\n\nHunk #15 succeeded at 2604 (offset -13 lines).\n1 out of 15 hunks FAILED -- saving rejects to file\nsrc/backend/access/nbtree/nbtpage.c.rej\npatching file src/backend/access/spgist/spgdoinsert.c\n\nIt is a very minor change, so I rebased the patch. Please take a look, if\nthat works for you.\n\n\n\n-- \nIbrar Ahmed",
"msg_date": "Sun, 14 Mar 2021 23:21:02 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Sun, Mar 14, 2021 at 11:51 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>\n> On Tue, Mar 9, 2021 at 3:31 PM Amul Sul <sulamul@gmail.com> wrote:\n>>\n>> On Thu, Mar 4, 2021 at 11:02 PM Amul Sul <sulamul@gmail.com> wrote:\n>> >\n>> > On Wed, Mar 3, 2021 at 8:56 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>> > >\n>> > > On Tue, Mar 2, 2021 at 7:22 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>[....]\n>\n> One of the patch (v18-0002-Error-or-Assert-before-START_CRIT_SECTION-for-WA.patch) from the latest patchset does not apply successfully.\n>\n> http://cfbot.cputube.org/patch_32_2602.log\n>\n> === applying patch ./v18-0002-Error-or-Assert-before-START_CRIT_SECTION-for-WA.patch\n>\n> Hunk #15 succeeded at 2604 (offset -13 lines).\n> 1 out of 15 hunks FAILED -- saving rejects to file src/backend/access/nbtree/nbtpage.c.rej\n> patching file src/backend/access/spgist/spgdoinsert.c\n>\n> It is a very minor change, so I rebased the patch. Please take a look, if that works for you.\n>\n\nThanks, I am getting one more failure for the vacuumlazy.c. on the\nlatest master head(d75288fb27b), I fixed that in attached version.\n\nRegards,\nAmul",
"msg_date": "Mon, 15 Mar 2021 12:55:42 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "I made a few changes and fixes in the attached version.\nThe document patch is now ready for review.\n\nRegards,\nAmul\n\n\n\nOn Mon, Mar 15, 2021 at 12:55 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Sun, Mar 14, 2021 at 11:51 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> >\n> > On Tue, Mar 9, 2021 at 3:31 PM Amul Sul <sulamul@gmail.com> wrote:\n> >>\n> >> On Thu, Mar 4, 2021 at 11:02 PM Amul Sul <sulamul@gmail.com> wrote:\n> >> >\n> >> > On Wed, Mar 3, 2021 at 8:56 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >> > >\n> >> > > On Tue, Mar 2, 2021 at 7:22 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >[....]\n> >\n> > One of the patch (v18-0002-Error-or-Assert-before-START_CRIT_SECTION-for-WA.patch) from the latest patchset does not apply successfully.\n> >\n> > http://cfbot.cputube.org/patch_32_2602.log\n> >\n> > === applying patch ./v18-0002-Error-or-Assert-before-START_CRIT_SECTION-for-WA.patch\n> >\n> > Hunk #15 succeeded at 2604 (offset -13 lines).\n> > 1 out of 15 hunks FAILED -- saving rejects to file src/backend/access/nbtree/nbtpage.c.rej\n> > patching file src/backend/access/spgist/spgdoinsert.c\n> >\n> > It is a very minor change, so I rebased the patch. Please take a look, if that works for you.\n> >\n>\n> Thanks, I am getting one more failure for the vacuumlazy.c. on the\n> latest master head(d75288fb27b), I fixed that in attached version.\n>\n> Regards,\n> Amul",
"msg_date": "Mon, 15 Mar 2021 17:45:22 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "Hi all,\nWhile testing this feature with v20-patch, the server is crashing with\nbelow steps.\n\nSteps to reproduce:\n1. Configure master-slave replication setup.\n2. Connect to Slave.\n3. Execute below statements, it will crash the server:\nSELECT pg_prohibit_wal(true);\nSELECT pg_prohibit_wal(false);\n\n-- Slave:\npostgres=# select pg_is_in_recovery();\n pg_is_in_recovery\n-------------------\n t\n(1 row)\n\npostgres=# SELECT pg_prohibit_wal(true);\n pg_prohibit_wal\n-----------------\n\n(1 row)\n\npostgres=# SELECT pg_prohibit_wal(false);\nWARNING: terminating connection because of crash of another server process\nDETAIL: The postmaster has commanded this server process to roll back the\ncurrent transaction and exit, because another server process exited\nabnormally and possibly corrupted shared memory.\nHINT: In a moment you should be able to reconnect to the database and\nrepeat your command.\nserver closed the connection unexpectedly\nThis probably means the server terminated abnormally\nbefore or while processing the request.\nThe connection to the server was lost. 
Attempting reset: Failed.\n!?>\n\n-- Below are the stack trace:\n[prabhat@localhost bin]$ gdb -q -c /tmp/data_slave/core.35273 postgres\nReading symbols from\n/home/prabhat/PG/PGsrcNew/postgresql/inst/bin/postgres...done.\n[New LWP 35273]\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"/lib64/libthread_db.so.1\".\nCore was generated by `postgres: checkpointer\n '.\nProgram terminated with signal 6, Aborted.\n#0 0x00007fa876233387 in raise () from /lib64/libc.so.6\nMissing separate debuginfos, use: debuginfo-install\nglibc-2.17-317.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64\nkrb5-libs-1.15.1-50.el7.x86_64 libcom_err-1.42.9-19.el7.x86_64\nlibgcc-4.8.5-44.el7.x86_64 libselinux-2.5-15.el7.x86_64\nopenssl-libs-1.0.2k-21.el7_9.x86_64 pcre-8.32-17.el7.x86_64\nzlib-1.2.7-18.el7.x86_64\n(gdb) bt\n#0 0x00007fa876233387 in raise () from /lib64/libc.so.6\n#1 0x00007fa876234a78 in abort () from /lib64/libc.so.6\n#2 0x0000000000aea31c in ExceptionalCondition (conditionName=0xb8c998\n\"ThisTimeLineID != 0 || IsBootstrapProcessingMode()\",\n errorType=0xb8956d \"FailedAssertion\", fileName=0xb897c0 \"xlog.c\",\nlineNumber=8611) at assert.c:69\n#3 0x0000000000588eb5 in InitXLOGAccess () at xlog.c:8611\n#4 0x0000000000588ae6 in LocalSetXLogInsertAllowed () at xlog.c:8483\n#5 0x00000000005881bb in XLogAcceptWrites (needChkpt=true, xlogreader=0x0,\nEndOfLog=0, EndOfLogTLI=0) at xlog.c:8008\n#6 0x00000000005751ed in ProcessWALProhibitStateChangeRequest () at\nwalprohibit.c:361\n#7 0x000000000088c69f in CheckpointerMain () at checkpointer.c:355\n#8 0x000000000059d7db in AuxiliaryProcessMain (argc=2,\nargv=0x7ffd1290d060) at bootstrap.c:455\n#9 0x000000000089fc5f in StartChildProcess (type=CheckpointerProcess) at\npostmaster.c:5416\n#10 0x000000000089f782 in sigusr1_handler (postgres_signal_arg=10) at\npostmaster.c:5128\n#11 <signal handler called>\n#12 0x00007fa8762f2983 in __select_nocancel () from /lib64/libc.so.6\n#13 0x000000000089b511 in 
ServerLoop () at postmaster.c:1700\n#14 0x000000000089af00 in PostmasterMain (argc=5, argv=0x15b8460) at\npostmaster.c:1408\n#15 0x000000000079c23a in main (argc=5, argv=0x15b8460) at main.c:209\n(gdb)\n\nkindly let me know if you need more inputs on this.\n\nOn Mon, Mar 15, 2021 at 12:56 PM Amul Sul <sulamul@gmail.com> wrote:\n\n> On Sun, Mar 14, 2021 at 11:51 PM Ibrar Ahmed <ibrar.ahmad@gmail.com>\n> wrote:\n> >\n> > On Tue, Mar 9, 2021 at 3:31 PM Amul Sul <sulamul@gmail.com> wrote:\n> >>\n> >> On Thu, Mar 4, 2021 at 11:02 PM Amul Sul <sulamul@gmail.com> wrote:\n> >> >\n> >> > On Wed, Mar 3, 2021 at 8:56 PM Robert Haas <robertmhaas@gmail.com>\n> wrote:\n> >> > >\n> >> > > On Tue, Mar 2, 2021 at 7:22 AM Dilip Kumar <dilipbalaut@gmail.com>\n> wrote:\n> >[....]\n> >\n> > One of the patch\n> (v18-0002-Error-or-Assert-before-START_CRIT_SECTION-for-WA.patch) from the\n> latest patchset does not apply successfully.\n> >\n> > http://cfbot.cputube.org/patch_32_2602.log\n> >\n> > === applying patch\n> ./v18-0002-Error-or-Assert-before-START_CRIT_SECTION-for-WA.patch\n> >\n> > Hunk #15 succeeded at 2604 (offset -13 lines).\n> > 1 out of 15 hunks FAILED -- saving rejects to file\n> src/backend/access/nbtree/nbtpage.c.rej\n> > patching file src/backend/access/spgist/spgdoinsert.c\n> >\n> > It is a very minor change, so I rebased the patch. Please take a look,\n> if that works for you.\n> >\n>\n> Thanks, I am getting one more failure for the vacuumlazy.c. on the\n> latest master head(d75288fb27b), I fixed that in attached version.\n>\n> Regards,\n> Amul\n>\n\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 19 Mar 2021 19:16:55 +0530",
"msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Fri, Mar 19, 2021 at 7:17 PM Prabhat Sahu\n<prabhat.sahu@enterprisedb.com> wrote:\n>\n> Hi all,\n> While testing this feature with v20-patch, the server is crashing with below steps.\n>\n> Steps to reproduce:\n> 1. Configure master-slave replication setup.\n> 2. Connect to Slave.\n> 3. Execute below statements, it will crash the server:\n> SELECT pg_prohibit_wal(true);\n> SELECT pg_prohibit_wal(false);\n>\n> -- Slave:\n> postgres=# select pg_is_in_recovery();\n> pg_is_in_recovery\n> -------------------\n> t\n> (1 row)\n>\n> postgres=# SELECT pg_prohibit_wal(true);\n> pg_prohibit_wal\n> -----------------\n>\n> (1 row)\n>\n> postgres=# SELECT pg_prohibit_wal(false);\n> WARNING: terminating connection because of crash of another server process\n> DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.\n> HINT: In a moment you should be able to reconnect to the database and repeat your command.\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n> !?>\n\nThanks Prabhat.\n\nThe assertion failure is due to wrong assumptions for the flag that were used\nfor the XLogAcceptWrites() call. In the case of standby, the startup process\nnever reached the place where it could call XLogAcceptWrites() and update the\nrespective flag. Due to this flag value, pg_prohibit_wal() function does\nalter system state in recovery state which is incorrect.\n\nIn the attached function I took enum value for that flag so that\npg_prohibit_wal() is only allowed in the recovery mode, iff that flag indicates\nthat XLogAcceptWrites() has been skipped previously.\n\nRegards,\nAmul",
"msg_date": "Mon, 22 Mar 2021 12:13:43 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "Attached is the rebase version for the latest master head(commit # 9f6f1f9b8e6).\n\nRegards,\nAmul\n\n\nOn Mon, Mar 22, 2021 at 12:13 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Fri, Mar 19, 2021 at 7:17 PM Prabhat Sahu\n> <prabhat.sahu@enterprisedb.com> wrote:\n> >\n> > Hi all,\n> > While testing this feature with v20-patch, the server is crashing with below steps.\n> >\n> > Steps to reproduce:\n> > 1. Configure master-slave replication setup.\n> > 2. Connect to Slave.\n> > 3. Execute below statements, it will crash the server:\n> > SELECT pg_prohibit_wal(true);\n> > SELECT pg_prohibit_wal(false);\n> >\n> > -- Slave:\n> > postgres=# select pg_is_in_recovery();\n> > pg_is_in_recovery\n> > -------------------\n> > t\n> > (1 row)\n> >\n> > postgres=# SELECT pg_prohibit_wal(true);\n> > pg_prohibit_wal\n> > -----------------\n> >\n> > (1 row)\n> >\n> > postgres=# SELECT pg_prohibit_wal(false);\n> > WARNING: terminating connection because of crash of another server process\n> > DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.\n> > HINT: In a moment you should be able to reconnect to the database and repeat your command.\n> > server closed the connection unexpectedly\n> > This probably means the server terminated abnormally\n> > before or while processing the request.\n> > The connection to the server was lost. Attempting reset: Failed.\n> > !?>\n>\n> Thanks Prabhat.\n>\n> The assertion failure is due to wrong assumptions for the flag that were used\n> for the XLogAcceptWrites() call. In the case of standby, the startup process\n> never reached the place where it could call XLogAcceptWrites() and update the\n> respective flag. 
Due to this flag value, pg_prohibit_wal() function does\n> alter system state in recovery state which is incorrect.\n>\n> In the attached function I took enum value for that flag so that\n> pg_prohibit_wal() is only allowed in the recovery mode, iff that flag indicates\n> that XLogAcceptWrites() has been skipped previously.\n>\n> Regards,\n> Amul",
"msg_date": "Mon, 5 Apr 2021 11:01:16 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Mon, Apr 5, 2021 at 11:02 AM Amul Sul <sulamul@gmail.com> wrote:\n>\n> Attached is the rebase version for the latest master head(commit # 9f6f1f9b8e6).\n\nSome minor comments on 0001:\nIsn't it \"might not be running\"?\n+ errdetail(\"Checkpointer might not running.\"),\n\nIsn't it \"Try again after sometime\"?\n+ errhint(\"Try after sometime again.\")));\n\nCan we have ereport(DEBUG1 just to be consistent(although it doesn't\nmake any difference from elog(DEBUG1) with the new log messages\nintroduced in the patch?\n+ elog(DEBUG1, \"waiting for backends to adopt requested WAL\nprohibit state change\");\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 5 Apr 2021 16:45:32 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Mon, Apr 5, 2021 at 4:45 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n\nThanks Bharath for your review.\n\n> On Mon, Apr 5, 2021 at 11:02 AM Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > Attached is the rebase version for the latest master head(commit # 9f6f1f9b8e6).\n>\n> Some minor comments on 0001:\n> Isn't it \"might not be running\"?\n> + errdetail(\"Checkpointer might not running.\"),\n>\n\nOk, fixed in the attached version.\n\n> Isn't it \"Try again after sometime\"?\n> + errhint(\"Try after sometime again.\")));\n>\n\nOk, done.\n\n> Can we have ereport(DEBUG1 just to be consistent(although it doesn't\n> make any difference from elog(DEBUG1) with the new log messages\n> introduced in the patch?\n> + elog(DEBUG1, \"waiting for backends to adopt requested WAL\n> prohibit state change\");\n>\n\nI think it's fine; many existing places have used elog(DEBUG1, ....) too.\n\nRegards,\nAmul",
"msg_date": "Mon, 5 Apr 2021 17:27:17 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "Rotten again, attached the rebased version.\n\nRegards,\nAmul\n\nOn Mon, Apr 5, 2021 at 5:27 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Mon, Apr 5, 2021 at 4:45 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n>\n> Thanks Bharath for your review.\n>\n> > On Mon, Apr 5, 2021 at 11:02 AM Amul Sul <sulamul@gmail.com> wrote:\n> > >\n> > > Attached is the rebase version for the latest master head(commit # 9f6f1f9b8e6).\n> >\n> > Some minor comments on 0001:\n> > Isn't it \"might not be running\"?\n> > + errdetail(\"Checkpointer might not running.\"),\n> >\n>\n> Ok, fixed in the attached version.\n>\n> > Isn't it \"Try again after sometime\"?\n> > + errhint(\"Try after sometime again.\")));\n> >\n>\n> Ok, done.\n>\n> > Can we have ereport(DEBUG1 just to be consistent(although it doesn't\n> > make any difference from elog(DEBUG1) with the new log messages\n> > introduced in the patch?\n> > + elog(DEBUG1, \"waiting for backends to adopt requested WAL\n> > prohibit state change\");\n> >\n>\n> I think it's fine; many existing places have used elog(DEBUG1, ....) too.\n>\n> Regards,\n> Amul",
"msg_date": "Wed, 7 Apr 2021 12:38:23 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "Rebased again.\n\nOn Wed, Apr 7, 2021 at 12:38 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> Rotten again, attached the rebased version.\n>\n> Regards,\n> Amul\n>\n> On Mon, Apr 5, 2021 at 5:27 PM Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > On Mon, Apr 5, 2021 at 4:45 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> >\n> > Thanks Bharath for your review.\n> >\n> > > On Mon, Apr 5, 2021 at 11:02 AM Amul Sul <sulamul@gmail.com> wrote:\n> > > >\n> > > > Attached is the rebase version for the latest master head(commit # 9f6f1f9b8e6).\n> > >\n> > > Some minor comments on 0001:\n> > > Isn't it \"might not be running\"?\n> > > + errdetail(\"Checkpointer might not running.\"),\n> > >\n> >\n> > Ok, fixed in the attached version.\n> >\n> > > Isn't it \"Try again after sometime\"?\n> > > + errhint(\"Try after sometime again.\")));\n> > >\n> >\n> > Ok, done.\n> >\n> > > Can we have ereport(DEBUG1 just to be consistent(although it doesn't\n> > > make any difference from elog(DEBUG1) with the new log messages\n> > > introduced in the patch?\n> > > + elog(DEBUG1, \"waiting for backends to adopt requested WAL\n> > > prohibit state change\");\n> > >\n> >\n> > I think it's fine; many existing places have used elog(DEBUG1, ....) too.\n> >\n> > Regards,\n> > Amul",
"msg_date": "Mon, 12 Apr 2021 19:34:15 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 10:04 AM Amul Sul <sulamul@gmail.com> wrote:\n> Rebased again.\n\nI started to look at this today, and didn't get very far, but I have a\nfew comments. The main one is that I don't think this patch implements\nthe design proposed in\nhttps://www.postgresql.org/message-id/CA+TgmoZ=CCTbAXxMTYZoGXEgqzOz9smkBWrDpsacpjvFcGCuaw@mail.gmail.com\n\nThe first part of that proposal said this:\n\n\"1. If the server starts up and is read-only and\nArchiveRecoveryRequested, clear the read-only state in memory and also\nin the control file, log a message saying that this has been done, and\nproceed. This makes some other cases simpler to deal with.\"\n\nAs I read it, the patch clears the read-only state in memory, does not\nclear it in the control file, and does not log a message.\n\nThe second part of this proposal was:\n\n\"2. Create a new function with a name like XLogAcceptWrites(). Move the\nfollowing things from StartupXLOG() into that function: (1) the call\nto UpdateFullPageWrites(), (2) the following block of code that does\neither CreateEndOfRecoveryRecord() or RequestCheckpoint() or\nCreateCheckPoint(), (3) the next block of code that runs\nrecovery_end_command, (4) the call to XLogReportParameters(), and (5)\nthe call to CompleteCommitTsInitialization(). Call the new function\nfrom the place where we now call XLogReportParameters(). This would\nmean that (1)-(3) happen later than they do now, which might require\nsome adjustments.\"\n\nNow you moved that code, but you also moved (6)\nCompleteCommitTsInitialization(), (7) setting the control file to\nDB_IN_PRODUCTION, (8) setting the state to RECOVERY_STATE_DONE, and\n(9) requesting a checkpoint if we were just promoted. That's not what\nwas proposed. One result of this is that the server now thinks it's in\nrecovery even after the startup process has exited.\nRecoveryInProgress() is still returning true everywhere. 
But that is\ninconsistent with what Andres and I were recommending in\nhttp://postgr.es/m/CA+TgmoZYQN=rcYE-iXWnjdvMAoH+7Jaqsif3U2k8xqXipBaS7A@mail.gmail.com\n\nI also noticed that 0001 does not compile without 0002, so the\nseparation into multiple patches is not clean. I would actually\nsuggest that the first patch in the series should just create\nXLogAcceptWrites() with the minimum amount of adjustment to make that\nwork. That would potentially let us commit that change independently,\nwhich would be good, because then if we accidentally break something,\nit'll be easier to pin down to that particular change instead of being\nmixed with everything else this needs to change.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 6 May 2021 15:52:57 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Fri, May 7, 2021 at 1:23 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Apr 12, 2021 at 10:04 AM Amul Sul <sulamul@gmail.com> wrote:\n> > Rebased again.\n>\n> I started to look at this today, and didn't get very far, but I have a\n> few comments. The main one is that I don't think this patch implements\n> the design proposed in\n> https://www.postgresql.org/message-id/CA+TgmoZ=CCTbAXxMTYZoGXEgqzOz9smkBWrDpsacpjvFcGCuaw@mail.gmail.com\n>\n> The first part of that proposal said this:\n>\n> \"1. If the server starts up and is read-only and\n> ArchiveRecoveryRequested, clear the read-only state in memory and also\n> in the control file, log a message saying that this has been done, and\n> proceed. This makes some other cases simpler to deal with.\"\n>\n> As I read it, the patch clears the read-only state in memory, does not\n> clear it in the control file, and does not log a message.\n>\n\nThe state in the control file also gets cleared. Though, after\nclearing in memory the state patch doesn't really do the immediate\nchange to the control file, it relies on the next UpdateControlFile()\nto do that.\n\nRegarding log message I think I have skipped that intentionally, to\navoid confusing log as \"system is now read write\" when we do start as\nhot-standby which is not really read-write.\n\n> The second part of this proposal was:\n>\n> \"2. Create a new function with a name like XLogAcceptWrites(). Move the\n> following things from StartupXLOG() into that function: (1) the call\n> to UpdateFullPageWrites(), (2) the following block of code that does\n> either CreateEndOfRecoveryRecord() or RequestCheckpoint() or\n> CreateCheckPoint(), (3) the next block of code that runs\n> recovery_end_command, (4) the call to XLogReportParameters(), and (5)\n> the call to CompleteCommitTsInitialization(). Call the new function\n> from the place where we now call XLogReportParameters(). 
This would\n> mean that (1)-(3) happen later than they do now, which might require\n> some adjustments.\"\n>\n> Now you moved that code, but you also moved (6)\n> CompleteCommitTsInitialization(), (7) setting the control file to\n> DB_IN_PRODUCTION, (8) setting the state to RECOVERY_STATE_DONE, and\n> (9) requesting a checkpoint if we were just promoted. That's not what\n> was proposed. One result of this is that the server now thinks it's in\n> recovery even after the startup process has exited.\n> RecoveryInProgress() is still returning true everywhere. But that is\n> inconsistent with what Andres and I were recommending in\n> http://postgr.es/m/CA+TgmoZYQN=rcYE-iXWnjdvMAoH+7Jaqsif3U2k8xqXipBaS7A@mail.gmail.com\n>\n\nRegarding modified approach, I tried to explain that why I did\nthis in http://postgr.es/m/CAAJ_b96Yb4jaW6oU1bVYEBaf=TQ-QL+mMT1ExfwvNZEr7XRyoQ@mail.gmail.com\n\n> I also noticed that 0001 does not compile without 0002, so the\n> separation into multiple patches is not clean. I would actually\n> suggest that the first patch in the series should just create\n> XLogAcceptWrites() with the minimum amount of adjustment to make that\n> work. That would potentially let us commit that change independently,\n> which would be good, because then if we accidentally break something,\n> it'll be easier to pin down to that particular change instead of being\n> mixed with everything else this needs to change.\n>\n\nOk, I will try in the next version.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Sun, 9 May 2021 10:56:07 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Sun, May 9, 2021 at 1:26 AM Amul Sul <sulamul@gmail.com> wrote:\n> The state in the control file also gets cleared. Though, after\n> clearing in memory the state patch doesn't really do the immediate\n> change to the control file, it relies on the next UpdateControlFile()\n> to do that.\n\nBut when will that happen? If you're relying on some very nearby code,\nthat might be OK, but perhaps a comment is in order. If you're just\nthinking it's going to happen eventually, I think that's not good\nenough.\n\n> Regarding log message I think I have skipped that intentionally, to\n> avoid confusing log as \"system is now read write\" when we do start as\n> hot-standby which is not really read-write.\n\nI think the message should not be phrased that way. In fact, I think\nnow that we've moved to calling this pg_prohibit_wal() rather than\nALTER SYSTEM READ ONLY, a lot of messages need to be rethought, and\nmaybe some comments and function names as well. Perhaps something\nlike:\n\nsystem is read only -> WAL is now prohibited\nsystem is read write -> WAL is no longer prohibited\n\nAnd then for this particular case, maybe something like:\n\nclearing WAL prohibition because the system is in archive recovery\n\n> > The second part of this proposal was:\n> >\n> > \"2. Create a new function with a name like XLogAcceptWrites(). Move the\n> > following things from StartupXLOG() into that function: (1) the call\n> > to UpdateFullPageWrites(), (2) the following block of code that does\n> > either CreateEndOfRecoveryRecord() or RequestCheckpoint() or\n> > CreateCheckPoint(), (3) the next block of code that runs\n> > recovery_end_command, (4) the call to XLogReportParameters(), and (5)\n> > the call to CompleteCommitTsInitialization(). Call the new function\n> > from the place where we now call XLogReportParameters(). 
This would\n> > mean that (1)-(3) happen later than they do now, which might require\n> > some adjustments.\"\n> >\n> > Now you moved that code, but you also moved (6)\n> > CompleteCommitTsInitialization(), (7) setting the control file to\n> > DB_IN_PRODUCTION, (8) setting the state to RECOVERY_STATE_DONE, and\n> > (9) requesting a checkpoint if we were just promoted. That's not what\n> > was proposed. One result of this is that the server now thinks it's in\n> > recovery even after the startup process has exited.\n> > RecoveryInProgress() is still returning true everywhere. But that is\n> > inconsistent with what Andres and I were recommending in\n> > http://postgr.es/m/CA+TgmoZYQN=rcYE-iXWnjdvMAoH+7Jaqsif3U2k8xqXipBaS7A@mail.gmail.com\n>\n> Regarding modified approach, I tried to explain that why I did\n> this in http://postgr.es/m/CAAJ_b96Yb4jaW6oU1bVYEBaf=TQ-QL+mMT1ExfwvNZEr7XRyoQ@mail.gmail.com\n\nI am not able to understand what problem you are seeing there. If\nwe're in crash recovery, then nobody can connect to the database, so\nthere can't be any concurrent activity. If we're in archive recovery,\nwe now clear the WAL-is-prohibited flag so that we will go read-write\ndirectly at the end of recovery. We can and should refuse any effort\nto call pg_prohibit_wal() during recovery. If we reached the end of\ncrash recovery and are now permitting read-only connections, why would\nanyone be able to write WAL before the system has been changed to\nread-write? If that can happen, it's a bug, not a reason to change the\ndesign.\n\nMaybe your concern here is about ordering: the process that is going\nto run XLogAcceptWrites() needs to allow xlog writes locally before we\ntell other backends that they also can xlog writes; otherwise, some\nother records could slip in before UpdateFullPageWrites() and similar\nhave run, which we probably don't want. 
But that's why\nLocalSetXLogInsertAllowed() was invented, and if it doesn't quite do\nwhat we need in this situation, we should be able to tweak it so it\ndoes.\n\nIf your concern is something else, can you spell it out for me again\nbecause I'm not getting it?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 10 May 2021 11:51:12 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Mon, May 10, 2021 at 9:21 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Sun, May 9, 2021 at 1:26 AM Amul Sul <sulamul@gmail.com> wrote:\n> > The state in the control file also gets cleared. Though, after\n> > clearing in memory the state patch doesn't really do the immediate\n> > change to the control file, it relies on the next UpdateControlFile()\n> > to do that.\n>\n> But when will that happen? If you're relying on some very nearby code,\n> that might be OK, but perhaps a comment is in order. If you're just\n> thinking it's going to happen eventually, I think that's not good\n> enough.\n>\n\nOk.\n\n> > Regarding log message I think I have skipped that intentionally, to\n> > avoid confusing log as \"system is now read write\" when we do start as\n> > hot-standby which is not really read-write.\n>\n> I think the message should not be phrased that way. In fact, I think\n> now that we've moved to calling this pg_prohibit_wal() rather than\n> ALTER SYSTEM READ ONLY, a lot of messages need to be rethought, and\n> maybe some comments and function names as well. Perhaps something\n> like:\n>\n> system is read only -> WAL is now prohibited\n> system is read write -> WAL is no longer prohibited\n>\n> And then for this particular case, maybe something like:\n>\n> clearing WAL prohibition because the system is in archive recovery\n>\n\nOk, thanks for the suggestions.\n\n> > > The second part of this proposal was:\n> > >\n> > > \"2. Create a new function with a name like XLogAcceptWrites(). Move the\n> > > following things from StartupXLOG() into that function: (1) the call\n> > > to UpdateFullPageWrites(), (2) the following block of code that does\n> > > either CreateEndOfRecoveryRecord() or RequestCheckpoint() or\n> > > CreateCheckPoint(), (3) the next block of code that runs\n> > > recovery_end_command, (4) the call to XLogReportParameters(), and (5)\n> > > the call to CompleteCommitTsInitialization(). 
Call the new function\n> > > from the place where we now call XLogReportParameters(). This would\n> > > mean that (1)-(3) happen later than they do now, which might require\n> > > some adjustments.\"\n> > >\n> > > Now you moved that code, but you also moved (6)\n> > > CompleteCommitTsInitialization(), (7) setting the control file to\n> > > DB_IN_PRODUCTION, (8) setting the state to RECOVERY_STATE_DONE, and\n> > > (9) requesting a checkpoint if we were just promoted. That's not what\n> > > was proposed. One result of this is that the server now thinks it's in\n> > > recovery even after the startup process has exited.\n> > > RecoveryInProgress() is still returning true everywhere. But that is\n> > > inconsistent with what Andres and I were recommending in\n> > > http://postgr.es/m/CA+TgmoZYQN=rcYE-iXWnjdvMAoH+7Jaqsif3U2k8xqXipBaS7A@mail.gmail.com\n> >\n> > Regarding modified approach, I tried to explain that why I did\n> > this in http://postgr.es/m/CAAJ_b96Yb4jaW6oU1bVYEBaf=TQ-QL+mMT1ExfwvNZEr7XRyoQ@mail.gmail.com\n>\n> I am not able to understand what problem you are seeing there. If\n> we're in crash recovery, then nobody can connect to the database, so\n> there can't be any concurrent activity. If we're in archive recovery,\n> we now clear the WAL-is-prohibited flag so that we will go read-write\n> directly at the end of recovery. We can and should refuse any effort\n> to call pg_prohibit_wal() during recovery. If we reached the end of\n> crash recovery and are now permitting read-only connections, why would\n> anyone be able to write WAL before the system has been changed to\n> read-write? 
If that can happen, it's a bug, not a reason to change the\n> design.\n>\n> Maybe your concern here is about ordering: the process that is going\n> to run XLogAcceptWrites() needs to allow xlog writes locally before we\n> tell other backends that they also can xlog writes; otherwise, some\n> other records could slip in before UpdateFullPageWrites() and similar\n> have run, which we probably don't want. But that's why\n> LocalSetXLogInsertAllowed() was invented, and if it doesn't quite do\n> what we need in this situation, we should be able to tweak it so it\n> does.\n>\n\nYes, we don't want any write slip in before UpdateFullPageWrites().\nRecently[1], we have decided to let the Checkpointed process call\nXLogAcceptWrites() unconditionally.\n\nHere problem is that when a backend executes the\npg_prohibit_wal(false) function to make the system read-write, the wal\nprohibited state is set to inprogress(ie.\nWALPROHIBIT_STATE_GOING_READ_WRITE) and then Checkpointer is signaled.\nNext, Checkpointer will convey this system change to all existing\nbackends using a global barrier, and after that final wal prohibited\nstate is set to the read-write(i.e. WALPROHIBIT_STATE_READ_WRITE).\nWhile Checkpointer is in the progress of conveying this global\nbarrier, any new backend can connect at that time and can write a new\nrecord because the inprogress read-write state is equivalent to the\nfinal read-write state iff LocalXLogInsertAllowed != 0 for that\nbackend. And, that new record could slip in before or in between\nrecords to be written by XLogAcceptWrites().\n\n1] http://postgr.es/m/CA+TgmoZYQN=rcYE-iXWnjdvMAoH+7Jaqsif3U2k8xqXipBaS7A@mail.gmail.com\n\nRegards,\nAmul\n\n\n",
"msg_date": "Mon, 10 May 2021 22:25:14 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Mon, May 10, 2021 at 10:25 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> Yes, we don't want any write slip in before UpdateFullPageWrites().\n> Recently[1], we have decided to let the Checkpointed process call\n> XLogAcceptWrites() unconditionally.\n>\n> Here problem is that when a backend executes the\n> pg_prohibit_wal(false) function to make the system read-write, the wal\n> prohibited state is set to inprogress(ie.\n> WALPROHIBIT_STATE_GOING_READ_WRITE) and then Checkpointer is signaled.\n> Next, Checkpointer will convey this system change to all existing\n> backends using a global barrier, and after that final wal prohibited\n> state is set to the read-write(i.e. WALPROHIBIT_STATE_READ_WRITE).\n> While Checkpointer is in the progress of conveying this global\n> barrier, any new backend can connect at that time and can write a new\n> record because the inprogress read-write state is equivalent to the\n> final read-write state iff LocalXLogInsertAllowed != 0 for that\n> backend. And, that new record could slip in before or in between\n> records to be written by XLogAcceptWrites().\n>\n> 1] http://postgr.es/m/CA+TgmoZYQN=rcYE-iXWnjdvMAoH+7Jaqsif3U2k8xqXipBaS7A@mail.gmail.com\n\nBut, IIUC, once the state is set to WALPROHIBIT_STATE_GOING_READ_WRITE\nand signaled to the checkpointer. The checkpointer should first call\nXLogAcceptWrites and then it should inform other backends through the\nglobal barrier? Are we worried that if we have written the WAL in\nXLogAcceptWrites but later if we could not set the state to\nWALPROHIBIT_STATE_READ_WRITE? Then maybe we can inform all the\nbackend first but before setting the state to\nWALPROHIBIT_STATE_READ_WRITE, we can call XLogAcceptWrites?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 11 May 2021 11:32:43 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Tue, May 11, 2021 at 11:33 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, May 10, 2021 at 10:25 PM Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > Yes, we don't want any write slip in before UpdateFullPageWrites().\n> > Recently[1], we have decided to let the Checkpointed process call\n> > XLogAcceptWrites() unconditionally.\n> >\n> > Here problem is that when a backend executes the\n> > pg_prohibit_wal(false) function to make the system read-write, the wal\n> > prohibited state is set to inprogress(ie.\n> > WALPROHIBIT_STATE_GOING_READ_WRITE) and then Checkpointer is signaled.\n> > Next, Checkpointer will convey this system change to all existing\n> > backends using a global barrier, and after that final wal prohibited\n> > state is set to the read-write(i.e. WALPROHIBIT_STATE_READ_WRITE).\n> > While Checkpointer is in the progress of conveying this global\n> > barrier, any new backend can connect at that time and can write a new\n> > record because the inprogress read-write state is equivalent to the\n> > final read-write state iff LocalXLogInsertAllowed != 0 for that\n> > backend. And, that new record could slip in before or in between\n> > records to be written by XLogAcceptWrites().\n> >\n> > 1] http://postgr.es/m/CA+TgmoZYQN=rcYE-iXWnjdvMAoH+7Jaqsif3U2k8xqXipBaS7A@mail.gmail.com\n>\n> But, IIUC, once the state is set to WALPROHIBIT_STATE_GOING_READ_WRITE\n> and signaled to the checkpointer. The checkpointer should first call\n> XLogAcceptWrites and then it should inform other backends through the\n> global barrier? Are we worried that if we have written the WAL in\n> XLogAcceptWrites but later if we could not set the state to\n> WALPROHIBIT_STATE_READ_WRITE? 
Then maybe we can inform all the\n> backend first but before setting the state to\n> WALPROHIBIT_STATE_READ_WRITE, we can call XLogAcceptWrites?\n>\n\nI get why you think that, I wasn't very precise in briefing the problem.\n\nAny new backend that gets connected right after the shared memory\nstate changes to WALPROHIBIT_STATE_GOING_READ_WRITE will be by\ndefault allowed to do the WAL writes. Such backends can perform write\noperation before the checkpointer does the XLogAcceptWrites(). Also,\npossible that a backend could connect at the same time checkpointer\nperforming XLogAcceptWrites() and can write a wal.\n\nSo, having XLogAcceptWrites() before does not really solve my concern.\nNote that the previous patch XLogAcceptWrites() does get called before\nglobal barrier emission.\n\nPlease let me know if it is not yet cleared to you, thanks.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Tue, 11 May 2021 14:15:32 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Tue, May 11, 2021 at 2:16 PM Amul Sul <sulamul@gmail.com> wrote:\n\n> I get why you think that, I wasn't very precise in briefing the problem.\n>\n> Any new backend that gets connected right after the shared memory\n> state changes to WALPROHIBIT_STATE_GOING_READ_WRITE will be by\n> default allowed to do the WAL writes. Such backends can perform write\n> operation before the checkpointer does the XLogAcceptWrites().\n\nOkay, make sense now. But my next question is why do we allow backends\nto write WAL in WALPROHIBIT_STATE_GOING_READ_WRITE state? why don't we\nwait until the shared memory state is changed to\nWALPROHIBIT_STATE_READ_WRITE?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 11 May 2021 14:26:19 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Tue, May 11, 2021 at 2:26 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, May 11, 2021 at 2:16 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> > I get why you think that, I wasn't very precise in briefing the problem.\n> >\n> > Any new backend that gets connected right after the shared memory\n> > state changes to WALPROHIBIT_STATE_GOING_READ_WRITE will be by\n> > default allowed to do the WAL writes. Such backends can perform write\n> > operation before the checkpointer does the XLogAcceptWrites().\n>\n> Okay, make sense now. But my next question is why do we allow backends\n> to write WAL in WALPROHIBIT_STATE_GOING_READ_WRITE state? why don't we\n> wait until the shared memory state is changed to\n> WALPROHIBIT_STATE_READ_WRITE?\n>\n\nOk, good question.\n\nNow let's first try to understand the Checkpointer's work.\n\nWhen Checkpointer sees the wal prohibited state is an in-progress state, then\nit first emits the global barrier and waits until all backers absorb that.\nAfter that it set the final requested WAL prohibit state.\n\nWhen other backends absorb those barriers then appropriate action is taken\n(e.g. abort the read-write transaction if moving to read-only) by them. Also,\nLocalXLogInsertAllowed flags get reset in it and that backend needs to call\nXLogInsertAllowed() to get the right value for it, which further decides WAL\nwrites permitted or prohibited.\n\nConsider an example that the system is trying to change to read-write and for\nthat wal prohibited state is set to WALPROHIBIT_STATE_GOING_READ_WRITE before\nCheckpointer starts its work. If we want to treat that system as read-only for\nthe WALPROHIBIT_STATE_GOING_READ_WRITE state as well. Then we might need to\nthink about the behavior of the backend that has absorbed the barrier and reset\nthe LocalXLogInsertAllowed flag. 
That backend eventually going to call\nXLogInsertAllowed() to get the actual value for it and by seeing the current\nstate as WALPROHIBIT_STATE_GOING_READ_WRITE, it will set LocalXLogInsertAllowed\nagain same as it was before for the read-only state.\n\nNow the question is when this value should get reset again so that backend can\nbe read-write? We are done with a barrier and that backend never going to come\nback to read-write again.\n\nOne solution, I think, is to set the final state before emitting the barrier\nbut as per the current design that should get set after all barrier processing.\nLet's see what Robert says on this.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Tue, 11 May 2021 15:38:18 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Tue, May 11, 2021 at 3:38 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Tue, May 11, 2021 at 2:26 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Tue, May 11, 2021 at 2:16 PM Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > > I get why you think that, I wasn't very precise in briefing the problem.\n> > >\n> > > Any new backend that gets connected right after the shared memory\n> > > state changes to WALPROHIBIT_STATE_GOING_READ_WRITE will be by\n> > > default allowed to do the WAL writes. Such backends can perform write\n> > > operation before the checkpointer does the XLogAcceptWrites().\n> >\n> > Okay, make sense now. But my next question is why do we allow backends\n> > to write WAL in WALPROHIBIT_STATE_GOING_READ_WRITE state? why don't we\n> > wait until the shared memory state is changed to\n> > WALPROHIBIT_STATE_READ_WRITE?\n> >\n>\n> Ok, good question.\n>\n> Now let's first try to understand the Checkpointer's work.\n>\n> When Checkpointer sees the wal prohibited state is an in-progress state, then\n> it first emits the global barrier and waits until all backers absorb that.\n> After that it set the final requested WAL prohibit state.\n>\n> When other backends absorb those barriers then appropriate action is taken\n> (e.g. abort the read-write transaction if moving to read-only) by them. Also,\n> LocalXLogInsertAllowed flags get reset in it and that backend needs to call\n> XLogInsertAllowed() to get the right value for it, which further decides WAL\n> writes permitted or prohibited.\n>\n> Consider an example that the system is trying to change to read-write and for\n> that wal prohibited state is set to WALPROHIBIT_STATE_GOING_READ_WRITE before\n> Checkpointer starts its work. If we want to treat that system as read-only for\n> the WALPROHIBIT_STATE_GOING_READ_WRITE state as well. Then we might need to\n> think about the behavior of the backend that has absorbed the barrier and reset\n> the LocalXLogInsertAllowed flag. 
That backend eventually going to call\n> XLogInsertAllowed() to get the actual value for it and by seeing the current\n> state as WALPROHIBIT_STATE_GOING_READ_WRITE, it will set LocalXLogInsertAllowed\n> again same as it was before for the read-only state.\n\nI might be missing something, but assume the behavior should be like this\n\n1. If the state is getting changed from WALPROHIBIT_STATE_READ_WRITE\n-> WALPROHIBIT_STATE_READ_ONLY, then as soon as the backend process\nthe barrier, we can immediately abort any read-write transaction(and\nstop allowing WAL writing), because once we ensure that all session\nhas responded that now they have no read-write transaction then we can\nsafely change the state from WALPROHIBIT_STATE_GOING_READ_ONLY to\nWALPROHIBIT_STATE_READ_ONLY.\n\n2. OTOH, if we are changing from WALPROHIBIT_STATE_READ_ONLY ->\nWALPROHIBIT_STATE_READ_WRITE, then we don't need to allow the backend\nto consider the system as read-write, instead, we should wait until\nthe shared state is changed to WALPROHIBIT_STATE_READ_WRITE.\n\nSo your problem is that on receiving the barrier we need to call\nLocalXLogInsertAllowed() from the backend, but how does that matter?\nyou can still make IsWALProhibited() return true.\n\nI don't know the complete code so I might be missing something but at\nleast that is what I would expect from the design POV.\n\n\nOther than this point, I think the state names READ_ONLY, READ_WRITE\nare a bit confusing no? because actually, these states represent\nwhether WAL is allowed or not, but READ_ONLY, READ_WRITE seems like we\nare putting the system under a Read-only state. For example, if you\nare doing some write operation on an unlogged table will be allowed, I\nguess because that will not generate the WAL until you commit (because\ncommit generates WAL) right? so practically, we are just blocking the\nWAL, not the write operation.\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 11 May 2021 16:12:53 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Tue, May 11, 2021 at 4:13 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, May 11, 2021 at 3:38 PM Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > On Tue, May 11, 2021 at 2:26 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Tue, May 11, 2021 at 2:16 PM Amul Sul <sulamul@gmail.com> wrote:\n> > >\n> > > > I get why you think that, I wasn't very precise in briefing the problem.\n> > > >\n> > > > Any new backend that gets connected right after the shared memory\n> > > > state changes to WALPROHIBIT_STATE_GOING_READ_WRITE will be by\n> > > > default allowed to do the WAL writes. Such backends can perform write\n> > > > operation before the checkpointer does the XLogAcceptWrites().\n> > >\n> > > Okay, make sense now. But my next question is why do we allow backends\n> > > to write WAL in WALPROHIBIT_STATE_GOING_READ_WRITE state? why don't we\n> > > wait until the shared memory state is changed to\n> > > WALPROHIBIT_STATE_READ_WRITE?\n> > >\n> >\n> > Ok, good question.\n> >\n> > Now let's first try to understand the Checkpointer's work.\n> >\n> > When Checkpointer sees the wal prohibited state is an in-progress state, then\n> > it first emits the global barrier and waits until all backers absorb that.\n> > After that it set the final requested WAL prohibit state.\n> >\n> > When other backends absorb those barriers then appropriate action is taken\n> > (e.g. abort the read-write transaction if moving to read-only) by them. Also,\n> > LocalXLogInsertAllowed flags get reset in it and that backend needs to call\n> > XLogInsertAllowed() to get the right value for it, which further decides WAL\n> > writes permitted or prohibited.\n> >\n> > Consider an example that the system is trying to change to read-write and for\n> > that wal prohibited state is set to WALPROHIBIT_STATE_GOING_READ_WRITE before\n> > Checkpointer starts its work. 
If we want to treat that system as read-only for\n> > the WALPROHIBIT_STATE_GOING_READ_WRITE state as well. Then we might need to\n> > think about the behavior of the backend that has absorbed the barrier and reset\n> > the LocalXLogInsertAllowed flag. That backend eventually going to call\n> > XLogInsertAllowed() to get the actual value for it and by seeing the current\n> > state as WALPROHIBIT_STATE_GOING_READ_WRITE, it will set LocalXLogInsertAllowed\n> > again same as it was before for the read-only state.\n>\n> I might be missing something, but assume the behavior should be like this\n>\n> 1. If the state is getting changed from WALPROHIBIT_STATE_READ_WRITE\n> -> WALPROHIBIT_STATE_READ_ONLY, then as soon as the backend process\n> the barrier, we can immediately abort any read-write transaction(and\n> stop allowing WAL writing), because once we ensure that all session\n> has responded that now they have no read-write transaction then we can\n> safely change the state from WALPROHIBIT_STATE_GOING_READ_ONLY to\n> WALPROHIBIT_STATE_READ_ONLY.\n>\n\nYes, that's what the current patch is doing from the first patch version.\n\n> 2. 
OTOH, if we are changing from WALPROHIBIT_STATE_READ_ONLY ->\n> WALPROHIBIT_STATE_READ_WRITE, then we don't need to allow the backend\n> to consider the system as read-write, instead, we should wait until\n> the shared state is changed to WALPROHIBIT_STATE_READ_WRITE.\n>\n\nI am sure that only not enough will have the same issue where\nLocalXLogInsertAllowed gets set the same as the read-only as described in\nmy previous reply.\n\n> So your problem is that on receiving the barrier we need to call\n> LocalXLogInsertAllowed() from the backend, but how does that matter?\n> you can still make IsWALProhibited() return true.\n>\n\nNote that LocalXLogInsertAllowed is a local flag for a backend, not a\nfunction, and in the server code at every place, we don't rely on\nIsWALProhibited() instead we do rely on LocalXLogInsertAllowed\nflags before wal writes and that check made via XLogInsertAllowed().\n\n> I don't know the complete code so I might be missing something but at\n> least that is what I would expect from the design POV.\n>\n>\n> Other than this point, I think the state names READ_ONLY, READ_WRITE\n> are a bit confusing no? because actually, these states represent\n> whether WAL is allowed or not, but READ_ONLY, READ_WRITE seems like we\n> are putting the system under a Read-only state. For example, if you\n> are doing some write operation on an unlogged table will be allowed, I\n> guess because that will not generate the WAL until you commit (because\n> commit generates WAL) right? so practically, we are just blocking the\n> WAL, not the write operation.\n>\n\nThis read-only and read-write are the wal prohibited states though we\nare using for read-only/read-write system in the discussion and the\ncomplete macro name is WALPROHIBIT_STATE_READ_ONLY and\nWALPROHIBIT_STATE_READ_WRITE, I am not sure why that would make\nimplementation confusing.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Tue, 11 May 2021 16:49:54 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Tue, May 11, 2021 at 4:50 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Tue, May 11, 2021 at 4:13 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > I might be missing something, but assume the behavior should be like this\n> >\n> > 1. If the state is getting changed from WALPROHIBIT_STATE_READ_WRITE\n> > -> WALPROHIBIT_STATE_READ_ONLY, then as soon as the backend process\n> > the barrier, we can immediately abort any read-write transaction(and\n> > stop allowing WAL writing), because once we ensure that all session\n> > has responded that now they have no read-write transaction then we can\n> > safely change the state from WALPROHIBIT_STATE_GOING_READ_ONLY to\n> > WALPROHIBIT_STATE_READ_ONLY.\n> >\n>\n> Yes, that's what the current patch is doing from the first patch version.\n>\n> > 2. OTOH, if we are changing from WALPROHIBIT_STATE_READ_ONLY ->\n> > WALPROHIBIT_STATE_READ_WRITE, then we don't need to allow the backend\n> > to consider the system as read-write, instead, we should wait until\n> > the shared state is changed to WALPROHIBIT_STATE_READ_WRITE.\n> >\n>\n> I am sure that only not enough will have the same issue where\n> LocalXLogInsertAllowed gets set the same as the read-only as described in\n> my previous reply.\n\nOkay, but while browsing the code I do not see any direct if condition\nbased on the \"LocalXLogInsertAllowed\" variable, can you point me to\nsome references?\nI only see one if check on this variable and that is in\nXLogInsertAllowed() function, but now in XLogInsertAllowed() function,\nyou are already checking IsWALProhibited. No?\n\n\n> > Other than this point, I think the state names READ_ONLY, READ_WRITE\n> > are a bit confusing no? because actually, these states represent\n> > whether WAL is allowed or not, but READ_ONLY, READ_WRITE seems like we\n> > are putting the system under a Read-only state. 
For example, if you\n> > are doing some write operation on an unlogged table will be allowed, I\n> > guess because that will not generate the WAL until you commit (because\n> > commit generates WAL) right? so practically, we are just blocking the\n> > WAL, not the write operation.\n> >\n>\n> This read-only and read-write are the wal prohibited states though we\n> are using for read-only/read-write system in the discussion and the\n> complete macro name is WALPROHIBIT_STATE_READ_ONLY and\n> WALPROHIBIT_STATE_READ_WRITE, I am not sure why that would make\n> implementation confusing.\n\nFine, I am not too particular about these names.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 11 May 2021 18:47:44 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Tue, May 11, 2021 at 6:48 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, May 11, 2021 at 4:50 PM Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > On Tue, May 11, 2021 at 4:13 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > I might be missing something, but assume the behavior should be like this\n> > >\n> > > 1. If the state is getting changed from WALPROHIBIT_STATE_READ_WRITE\n> > > -> WALPROHIBIT_STATE_READ_ONLY, then as soon as the backend process\n> > > the barrier, we can immediately abort any read-write transaction(and\n> > > stop allowing WAL writing), because once we ensure that all session\n> > > has responded that now they have no read-write transaction then we can\n> > > safely change the state from WALPROHIBIT_STATE_GOING_READ_ONLY to\n> > > WALPROHIBIT_STATE_READ_ONLY.\n> > >\n> >\n> > Yes, that's what the current patch is doing from the first patch version.\n> >\n> > > 2. OTOH, if we are changing from WALPROHIBIT_STATE_READ_ONLY ->\n> > > WALPROHIBIT_STATE_READ_WRITE, then we don't need to allow the backend\n> > > to consider the system as read-write, instead, we should wait until\n> > > the shared state is changed to WALPROHIBIT_STATE_READ_WRITE.\n> > >\n> >\n> > I am sure that only not enough will have the same issue where\n> > LocalXLogInsertAllowed gets set the same as the read-only as described in\n> > my previous reply.\n>\n> Okay, but while browsing the code I do not see any direct if condition\n> based on the \"LocalXLogInsertAllowed\" variable, can you point me to\n> some references?\n> I only see one if check on this variable and that is in\n> XLogInsertAllowed() function, but now in XLogInsertAllowed() function,\n> you are already checking IsWALProhibited. No?\n>\n\nI am not sure I understood this. 
Where am I checking IsWALProhibited()?\n\nIsWALProhibited() is called by XLogInsertAllowed() once when\nLocalXLogInsertAllowed is in a reset state, and that result will be\ncached in LocalXLogInsertAllowed and will be used in the subsequent\nXLogInsertAllowed() call.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Tue, 11 May 2021 18:56:22 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Tue, May 11, 2021 at 6:56 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Tue, May 11, 2021 at 6:48 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Tue, May 11, 2021 at 4:50 PM Amul Sul <sulamul@gmail.com> wrote:\n> > >\n> > > On Tue, May 11, 2021 at 4:13 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > I might be missing something, but assume the behavior should be like this\n> > > >\n> > > > 1. If the state is getting changed from WALPROHIBIT_STATE_READ_WRITE\n> > > > -> WALPROHIBIT_STATE_READ_ONLY, then as soon as the backend process\n> > > > the barrier, we can immediately abort any read-write transaction(and\n> > > > stop allowing WAL writing), because once we ensure that all session\n> > > > has responded that now they have no read-write transaction then we can\n> > > > safely change the state from WALPROHIBIT_STATE_GOING_READ_ONLY to\n> > > > WALPROHIBIT_STATE_READ_ONLY.\n> > > >\n> > >\n> > > Yes, that's what the current patch is doing from the first patch version.\n> > >\n> > > > 2. OTOH, if we are changing from WALPROHIBIT_STATE_READ_ONLY ->\n> > > > WALPROHIBIT_STATE_READ_WRITE, then we don't need to allow the backend\n> > > > to consider the system as read-write, instead, we should wait until\n> > > > the shared state is changed to WALPROHIBIT_STATE_READ_WRITE.\n> > > >\n> > >\n> > > I am sure that only not enough will have the same issue where\n> > > LocalXLogInsertAllowed gets set the same as the read-only as described in\n> > > my previous reply.\n> >\n> > Okay, but while browsing the code I do not see any direct if condition\n> > based on the \"LocalXLogInsertAllowed\" variable, can you point me to\n> > some references?\n> > I only see one if check on this variable and that is in\n> > XLogInsertAllowed() function, but now in XLogInsertAllowed() function,\n> > you are already checking IsWALProhibited. No?\n> >\n>\n> I am not sure I understood this. 
Where am I checking IsWALProhibited()?\n>\n> IsWALProhibited() is called by XLogInsertAllowed() once when\n> LocalXLogInsertAllowed is in a reset state, and that result will be\n> cached in LocalXLogInsertAllowed and will be used in the subsequent\n> XLogInsertAllowed() call.\n\nOkay, got what you were trying to say. But that can be easily\nfixable, I mean if the state is WALPROHIBIT_STATE_GOING_READ_WRITE\nthen what we can do is don't allow to write the WAL but let's not set\nthe LocalXLogInsertAllowed to 0. So until we are in the intermediate\nstate WALPROHIBIT_STATE_GOING_READ_WRITE, we will always have to rely\non GetWALProhibitState(), I know this will add a performance penalty\nbut this is for the short period until we are in the intermediate\nstate. After that as soon as it will set to\nWALPROHIBIT_STATE_READ_WRITE then the XLogInsertAllowed() will set\nLocalXLogInsertAllowed to 1.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 11 May 2021 19:49:53 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Tue, 11 May 2021 at 7:50 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Tue, May 11, 2021 at 6:56 PM Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > On Tue, May 11, 2021 at 6:48 PM Dilip Kumar <dilipbalaut@gmail.com>\n> wrote:\n> > >\n> > > On Tue, May 11, 2021 at 4:50 PM Amul Sul <sulamul@gmail.com> wrote:\n> > > >\n> > > > On Tue, May 11, 2021 at 4:13 PM Dilip Kumar <dilipbalaut@gmail.com>\n> wrote:\n> > > > > I might be missing something, but assume the behavior should be\n> like this\n> > > > >\n> > > > > 1. If the state is getting changed from\n> WALPROHIBIT_STATE_READ_WRITE\n> > > > > -> WALPROHIBIT_STATE_READ_ONLY, then as soon as the backend process\n> > > > > the barrier, we can immediately abort any read-write\n> transaction(and\n> > > > > stop allowing WAL writing), because once we ensure that all session\n> > > > > has responded that now they have no read-write transaction then we\n> can\n> > > > > safely change the state from WALPROHIBIT_STATE_GOING_READ_ONLY to\n> > > > > WALPROHIBIT_STATE_READ_ONLY.\n> > > > >\n> > > >\n> > > > Yes, that's what the current patch is doing from the first patch\n> version.\n> > > >\n> > > > > 2. 
OTOH, if we are changing from WALPROHIBIT_STATE_READ_ONLY ->\n> > > > > WALPROHIBIT_STATE_READ_WRITE, then we don't need to allow the\n> backend\n> > > > > to consider the system as read-write, instead, we should wait until\n> > > > > the shared state is changed to WALPROHIBIT_STATE_READ_WRITE.\n> > > > >\n> > > >\n> > > > I am sure that only not enough will have the same issue where\n> > > > LocalXLogInsertAllowed gets set the same as the read-only as\n> described in\n> > > > my previous reply.\n> > >\n> > > Okay, but while browsing the code I do not see any direct if condition\n> > > based on the \"LocalXLogInsertAllowed\" variable, can you point me to\n> > > some references?\n> > > I only see one if check on this variable and that is in\n> > > XLogInsertAllowed() function, but now in XLogInsertAllowed() function,\n> > > you are already checking IsWALProhibited. No?\n> > >\n> >\n> > I am not sure I understood this. Where am I checking IsWALProhibited()?\n> >\n> > IsWALProhibited() is called by XLogInsertAllowed() once when\n> > LocalXLogInsertAllowed is in a reset state, and that result will be\n> > cached in LocalXLogInsertAllowed and will be used in the subsequent\n> > XLogInsertAllowed() call.\n>\n> Okay, got what you were trying to say. But that can be easily\n> fixable, I mean if the state is WALPROHIBIT_STATE_GOING_READ_WRITE\n> then what we can do is don't allow to write the WAL but let's not set\n> the LocalXLogInsertAllowed to 0. So until we are in the intermediate\n> state WALPROHIBIT_STATE_GOING_READ_WRITE, we will always have to rely\n> on GetWALProhibitState(), I know this will add a performance penalty\n> but this is for the short period until we are in the intermediate\n> state. 
After that as soon as it will set to\n> WALPROHIBIT_STATE_READ_WRITE then the XLogInsertAllowed() will set\n> LocalXLogInsertAllowed to 1.\n\n\nI think I have much easier solution than this, will post that with update\nversion patch set tomorrow.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Tue, 11 May 2021 20:47:32 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Tue, May 11, 2021 at 11:17 AM Amul Sul <sulamul@gmail.com> wrote:\n> I think I have much easier solution than this, will post that with update version patch set tomorrow.\n\nI don't know what you have in mind, but based on this discussion, it\nseems to me that we should just have 5 states instead of 4:\n\n1. WAL is permitted.\n2. WAL is being prohibited but some backends may not know about the change yet.\n3. WAL is prohibited.\n4. WAL is in the process of being permitted but XLogAcceptWrites() may\nnot have been called yet.\n5. WAL is in the process of being permitted and XLogAcceptWrites() has\nbeen called but some backends may not know about the change yet.\n\nIf we're in state #3 and someone does pg_prohibit_wal(false) then we\nenter state #4. The checkpointer calls XLogAcceptWrites(), moves us to\nstate #5, and pushes out a barrier. Then it waits for the barrier to\nbe absorbed and, when it has been, it moves us to state #1. Then if\nsomeone does pg_prohibit_wal(true) we move to state #2. The\ncheckpointer pushes out a barrier and waits for it to be absorbed.\nThen it calls XLogFlush() and afterward moves us to state #3.\n\nWe can have any (reasonable) number of states that we want. There's\nnothing magical about 4.\n\nI also entirely agree with Dilip that we should do some renaming to\nget rid of the read-write/read-only terminology, now that this is no\nlonger part of the syntax. In fact I made the exact same point in my\nlast review. The WALPROHIBIT_STATE_* constants are just one thing of\nmany that needs to be included in that renaming.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 11 May 2021 14:24:25 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Tue, May 11, 2021 at 11:54 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, May 11, 2021 at 11:17 AM Amul Sul <sulamul@gmail.com> wrote:\n> > I think I have much easier solution than this, will post that with update version patch set tomorrow.\n>\n> I don't know what you have in mind, but based on this discussion, it\n> seems to me that we should just have 5 states instead of 4:\n>\n> 1. WAL is permitted.\n> 2. WAL is being prohibited but some backends may not know about the change yet.\n> 3. WAL is prohibited.\n> 4. WAL is in the process of being permitted but XLogAcceptWrites() may\n> not have been called yet.\n> 5. WAL is in the process of being permitted and XLogAcceptWrites() has\n> been called but some backends may not know about the change yet.\n>\n> If we're in state #3 and someone does pg_prohibit_wal(false) then we\n> enter state #4. The checkpointer calls XLogAcceptWrites(), moves us to\n> state #5, and pushes out a barrier. Then it waits for the barrier to\n> be absorbed and, when it has been, it moves us to state #1. Then if\n> someone does pg_prohibit_wal(true) we move to state #2. The\n> checkpointer pushes out a barrier and waits for it to be absorbed.\n> Then it calls XLogFlush() and afterward moves us to state #3.\n>\n> We can have any (reasonable) number of states that we want. There's\n> nothing magical about 4.\n\nYour idea makes sense, but IMHO, if we are first writing\nXLogAcceptWrites() and then pushing out the barrier, then I don't\nunderstand the meaning of having state #4. I mean whenever any\nbackend receives the barrier the system will always be in state #5.\nSo what do we want to do with state #4?\n\nIs it just to make the state machine better? I mean in the checkpoint\nprocess, we don't need separate \"if checks\" whether the\nXLogAcceptWrites() is called or not, instead we can just rely on the\nstate, if it is #4 then we have to call XLogAcceptWrites(). 
If so\nthen I think it's okay to have an additional state, just wanted to\nknow what idea you had in mind?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 12 May 2021 11:08:53 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, May 12, 2021 at 11:09 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, May 11, 2021 at 11:54 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Tue, May 11, 2021 at 11:17 AM Amul Sul <sulamul@gmail.com> wrote:\n> > > I think I have much easier solution than this, will post that with update version patch set tomorrow.\n> >\n> > I don't know what you have in mind, but based on this discussion, it\n> > seems to me that we should just have 5 states instead of 4:\n> >\n\nI had to have two different ideas, the first one is a little bit\naligned with the approach you mentioned below but without introducing\na new state. Basically, what we want is to restrict any backend that\nconnects to the server and write a WAL record while we are doing\nXLogAcceptWrites(). For XLogAcceptWrites() skip we do already have a\nflag for that, when that flag is set (i.e. XLogAcceptWrites() skipped\npreviously) then treat the system as read-only (i.e. WAL prohibited)\nuntil XLogAcceptWrites() finishes. 
In that case, our IsWALProhibited()\nfunction will be:\n\nbool\nIsWALProhibited(void)\n{\n WALProhibitState cur_state;\n\n /*\n * If essential operations are needed to enable wal writes are skipped\n * previously then treat this state as WAL prohibited until that gets\n * done.\n */\n if (unlikely(GetXLogWriteAllowedState() == XLOG_ACCEPT_WRITES_SKIPPED))\n return true;\n\n cur_state = GetWALProhibitState(GetWALProhibitCounter());\n\n return (cur_state != WALPROHIBIT_STATE_READ_WRITE &&\n cur_state != WALPROHIBIT_STATE_GOING_READ_WRITE);\n}\n\nAnother idea that I want to propose & did the changes according to in\nthe attached version is making IsWALProhibited() something like this:\n\nbool\nIsWALProhibited(void)\n{\n /* Other than read-write state will be considered as read-only */\n return (GetWALProhibitState(GetWALProhibitCounter()) !=\n WALPROHIBIT_STATE_READ_WRITE);\n}\n\nBut this needs some additional changes to CompleteWALProhibitChange()\nfunction where the final in-memory system state update happens\ndifferently i.e. before or after emitting a global barrier.\n\nWhen in-memory WAL prohibited state is _GOING_READ_WRITE then\nin-memory state immediately changes to _READ_WRITE. After that global\nbarrier is emitted for other backends to change their local state.\nThis should be harmless because a _READ_WRITE system could have\n_READ_ONLY and _READ_WRITE backends.\n\nBut when the in-memory WAL prohibited state is _GOING_READ_ONLY then\nin-memory update for the final state setting is not going to happen\nbefore the global barrier. 
We cannot say the system is _READ_ONLY\nuntil we ensure that all backends are _READ_ONLY.\n\nFor more details please have a look at CompleteWALProhibitChange().\nNote that XLogAcceptWrites() happens before\nCompleteWALProhibitChange() so if any backend connect while\nXLogAcceptWrites() is in progress and will not allow WAL writes until it\ngets finished and CompleteWALProhibitChange() executed.\n\nThe second approach is much better, IMO, because IsWALProhibited() is\nmuch lighter which would run a number of times when a new backend\nconnects and/or its LocalXLogInsertAllowed cached value gets reset.\nPerhaps, you could argue that the number of calls might not be that\nmuch due to the locally cached value in LocalXLogInsertAllowed, but I\nam in favour of having less work.\n\nApart from this, I made a separate patch for XLogAcceptWrites()\nrefactoring. Now, each patch can be compiled without having the next\npatch on top of it.\n\n> > 1. WAL is permitted.\n> > 2. WAL is being prohibited but some backends may not know about the change yet.\n> > 3. WAL is prohibited.\n> > 4. WAL is in the process of being permitted but XLogAcceptWrites() may\n> > not have been called yet.\n> > 5. WAL is in the process of being permitted and XLogAcceptWrites() has\n> > been called but some backends may not know about the change yet.\n> >\n> > If we're in state #3 and someone does pg_prohibit_wal(false) then we\n> > enter state #4. The checkpointer calls XLogAcceptWrites(), moves us to\n> > state #5, and pushes out a barrier. Then it waits for the barrier to\n> > be absorbed and, when it has been, it moves us to state #1. Then if\n> > someone does pg_prohibit_wal(true) we move to state #2. The\n> > checkpointer pushes out a barrier and waits for it to be absorbed.\n> > Then it calls XLogFlush() and afterward moves us to state #3.\n> >\n> > We can have any (reasonable) number of states that we want. 
There's\n> > nothing magical about 4.\n>\n> Your idea makes sense, but IMHO, if we are first writing\n> XLogAcceptWrites() and then pushing out the barrier, then I don't\n> understand the meaning of having state #4. I mean whenever any\n> backend receives the barrier the system will always be in state #5.\n> So what do we want to do with state #4?\n>\n> Is it just to make the state machine better? I mean in the checkpoint\n> process, we don't need separate \"if checks\" whether the\n> XLogAcceptWrites() is called or not, instead we can just rely on the\n> state, if it is #4 then we have to call XLogAcceptWrites(). If so\n> then I think it's okay to have an additional state, just wanted to\n> know what idea you had in mind?\n>\nAFAICU, that proposed state #4 is to restrict the newly connected\nbackend from WAL writes. My first approach doing the same by changing\nIsWALProhibited() a bit.\n\nRegards,\nAmul",
"msg_date": "Wed, 12 May 2021 17:55:08 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, May 12, 2021 at 1:39 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Your idea makes sense, but IMHO, if we are first writing\n> XLogAcceptWrites() and then pushing out the barrier, then I don't\n> understand the meaning of having state #4. I mean whenever any\n> backend receives the barrier the system will always be in state #5.\n> So what do we want to do with state #4?\n\nWell, if you don't have that, how does the checkpointer know that it's\nsupposed to push out the barrier?\n\nYou and Amul both seem to want to merge states #4 and #5. But how to\nmake that work? Basically what you are both saying is that, after we\nmove into the \"going read-write\" state, backends aren't immediately\ntold that they can write WAL, but have to keep checking back. But this\ncould be expensive. If you have one state that means that the\ncheckpointer has been requested to run XLogAcceptWrites() and push out\na barrier, and another state to mean that it has done so, then you\navoid that. Maybe that overhead wouldn't be large anyway, but it seems\nlike it's only necessary because you're trying to merge two states\nwhich, from a logical point of view, are separate.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 12 May 2021 16:56:14 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, May 13, 2021 at 2:26 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, May 12, 2021 at 1:39 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Your idea makes sense, but IMHO, if we are first writing\n> > XLogAcceptWrites() and then pushing out the barrier, then I don't\n> > understand the meaning of having state #4. I mean whenever any\n> > backend receives the barrier the system will always be in state #5.\n> > So what do we want to do with state #4?\n>\n> Well, if you don't have that, how does the checkpointer know that it's\n> supposed to push out the barrier?\n>\n> You and Amul both seem to want to merge states #4 and #5. But how to\n> make that work? Basically what you are both saying is that, after we\n> move into the \"going read-write\" state, backends aren't immediately\n> told that they can write WAL, but have to keep checking back. But this\n> could be expensive. If you have one state that means that the\n> checkpointer has been requested to run XLogAcceptWrites() and push out\n> a barrier, and another state to mean that it has done so, then you\n> avoid that. Maybe that overhead wouldn't be large anyway, but it seems\n> like it's only necessary because you're trying to merge two states\n> which, from a logical point of view, are separate.\n\nI don't have an objection to having 5 states, just wanted to\nunderstand your reasoning. So it makes sense to me. Thanks.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 13 May 2021 11:32:20 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, May 12, 2021 at 5:55 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n\nThanks for the updated patch, while going through I noticed this comment.\n\n+ /*\n+ * WAL prohibit state changes not allowed during recovery except the crash\n+ * recovery case.\n+ */\n+ PreventCommandDuringRecovery(\"pg_prohibit_wal()\");\n\nWhy do we need to allow state change during recovery? Do you still\nneed it after the latest changes you discussed here, I mean now\nXLogAcceptWrites() being called before sending barrier to backends.\nSo now we are not afraid that the backend will write WAL before we\ncall XLogAcceptWrites(). So now IMHO, we don't need to keep the\nsystem in recovery until pg_prohibit_wal(false) is called, right?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 13 May 2021 12:36:28 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, May 13, 2021 at 12:36 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, May 12, 2021 at 5:55 PM Amul Sul <sulamul@gmail.com> wrote:\n> >\n>\n> Thanks for the updated patch, while going through I noticed this comment.\n>\n> + /*\n> + * WAL prohibit state changes not allowed during recovery except the crash\n> + * recovery case.\n> + */\n> + PreventCommandDuringRecovery(\"pg_prohibit_wal()\");\n>\n> Why do we need to allow state change during recovery? Do you still\n> need it after the latest changes you discussed here, I mean now\n> XLogAcceptWrites() being called before sending barrier to backends.\n> So now we are not afraid that the backend will write WAL before we\n> call XLogAcceptWrites(). So now IMHO, we don't need to keep the\n> system in recovery until pg_prohibit_wal(false) is called, right?\n>\n\nYour understanding is correct, and the previous patch also does the same, but\nthe code comment is wrong. Fixed in the attached version, also rebased for the\nlatest master head. Sorry for the confusion.\n\nRegards,\nAmul",
"msg_date": "Thu, 13 May 2021 14:54:04 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, May 13, 2021 at 2:54 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Thu, May 13, 2021 at 12:36 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Wed, May 12, 2021 at 5:55 PM Amul Sul <sulamul@gmail.com> wrote:\n> > >\n> >\n> > Thanks for the updated patch, while going through I noticed this comment.\n> >\n> > + /*\n> > + * WAL prohibit state changes not allowed during recovery except the crash\n> > + * recovery case.\n> > + */\n> > + PreventCommandDuringRecovery(\"pg_prohibit_wal()\");\n> >\n> > Why do we need to allow state change during recovery? Do you still\n> > need it after the latest changes you discussed here, I mean now\n> > XLogAcceptWrites() being called before sending barrier to backends.\n> > So now we are not afraid that the backend will write WAL before we\n> > call XLogAcceptWrites(). So now IMHO, we don't need to keep the\n> > system in recovery until pg_prohibit_wal(false) is called, right?\n> >\n>\n> Your understanding is correct, and the previous patch also does the same, but\n> the code comment is wrong. Fixed in the attached version, also rebased for the\n> latest master head. Sorry for the confusion.\n\nGreat thanks. I will review the remaining patch soon.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 13 May 2021 14:56:29 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, May 13, 2021 at 2:56 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> Great thanks. I will review the remaining patch soon.\n\nI have reviewed v28-0003, and I have some comments on this.\n\n===\n@@ -126,9 +127,14 @@ XLogBeginInsert(void)\n Assert(mainrdata_last == (XLogRecData *) &mainrdata_head);\n Assert(mainrdata_len == 0);\n\n+ /*\n+ * WAL permission must have checked before entering the critical section.\n+ * Otherwise, WAL prohibited error will force system panic.\n+ */\n+ Assert(walpermit_checked_state != WALPERMIT_UNCHECKED ||\n!CritSectionCount);\n+\n /* cross-check on whether we should be here or not */\n- if (!XLogInsertAllowed())\n- elog(ERROR, \"cannot make new WAL entries during recovery\");\n+ CheckWALPermitted();\n\nWe must not call CheckWALPermitted inside the critical section,\ninstead if we are here we must be sure that\nWAL is permitted, so better put an assert. Even if that is ensured by\nsome other mean then also I don't\nsee any reason for calling this error generating function.\n\n===\n\n+CheckWALPermitted(void)\n+{\n+ if (!XLogInsertAllowed())\n+ ereport(ERROR,\n+ (errcode(ERRCODE_READ_ONLY_SQL_TRANSACTION),\n+ errmsg(\"system is now read only\")));\n+\n\nsystem is now read only -> wal is prohibited (in error message)\n\n===\n\n- * We can't write WAL in recovery mode, so there's no point trying to\n+ * We can't write WAL during read-only mode, so there's no point trying to\n\nduring read-only mode -> if WAL is prohibited or WAL recovery in\nprogress (add recovery in progress and also modify read-only to wal\nprohibited)\n\n===\n\n+ if (!XLogInsertAllowed())\n {\n GUC_check_errcode(ERRCODE_FEATURE_NOT_SUPPORTED);\n- GUC_check_errmsg(\"cannot set transaction read-write mode\nduring recovery\");\n+ GUC_check_errmsg(\"cannot set transaction read-write mode\nwhile system is read only\");\n return false;\n }\n\nsystem is read only -> WAL is prohibited\n\n===\n\nI think that's all, I have to say about 0003.\n\n-- 
\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 15 May 2021 15:12:15 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Sat, May 15, 2021 at 3:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, May 13, 2021 at 2:56 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > Great thanks. I will review the remaining patch soon.\n>\n> I have reviewed v28-0003, and I have some comments on this.\n>\n> ===\n> @@ -126,9 +127,14 @@ XLogBeginInsert(void)\n> Assert(mainrdata_last == (XLogRecData *) &mainrdata_head);\n> Assert(mainrdata_len == 0);\n>\n> + /*\n> + * WAL permission must have checked before entering the critical section.\n> + * Otherwise, WAL prohibited error will force system panic.\n> + */\n> + Assert(walpermit_checked_state != WALPERMIT_UNCHECKED ||\n> !CritSectionCount);\n> +\n> /* cross-check on whether we should be here or not */\n> - if (!XLogInsertAllowed())\n> - elog(ERROR, \"cannot make new WAL entries during recovery\");\n> + CheckWALPermitted();\n>\n> We must not call CheckWALPermitted inside the critical section,\n> instead if we are here we must be sure that\n> WAL is permitted, so better put an assert. Even if that is ensured by\n> some other mean then also I don't\n> see any reason for calling this error generating function.\n>\n\nI understand that we should not have an error inside a critical section but\nthis check is not wrong. Patch has enough checking so that errors due to WAL\nprohibited state must not hit in the critical section, see assert just before\nCheckWALPermitted(). Before entering into the critical section, we do have an\nexplicit WAL prohibited check. 
And to make sure that check has been done for\nall current critical section for the wal writes, we have aforesaid assert\nchecking, for more detail on this please have a look at the \"WAL prohibited\nsystem state\" section of src/backend/access/transam/README added in 0004 patch.\nThis assertion also ensures that future development does not miss the WAL\nprohibited state check before entering into a newly added critical section for\nWAL writes.\n\n> ===\n>\n> +CheckWALPermitted(void)\n> +{\n> + if (!XLogInsertAllowed())\n> + ereport(ERROR,\n> + (errcode(ERRCODE_READ_ONLY_SQL_TRANSACTION),\n> + errmsg(\"system is now read only\")));\n> +\n>\n> system is now read only -> wal is prohibited (in error message)\n>\n> ===\n>\n> - * We can't write WAL in recovery mode, so there's no point trying to\n> + * We can't write WAL during read-only mode, so there's no point trying to\n>\n> during read-only mode -> if WAL is prohibited or WAL recovery in\n> progress (add recovery in progress and also modify read-only to wal\n> prohibited)\n>\n> ===\n>\n> + if (!XLogInsertAllowed())\n> {\n> GUC_check_errcode(ERRCODE_FEATURE_NOT_SUPPORTED);\n> - GUC_check_errmsg(\"cannot set transaction read-write mode\n> during recovery\");\n> + GUC_check_errmsg(\"cannot set transaction read-write mode\n> while system is read only\");\n> return false;\n> }\n>\n> system is read only -> WAL is prohibited\n>\n> ===\n\nFixed all in the attached version.\n\n>\n> I think that's all, I have to say about 0003.\n>\n\nThanks for the review.\n\nRegards,\nAmul",
"msg_date": "Mon, 17 May 2021 11:47:28 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Mon, May 17, 2021 at 11:48 AM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Sat, May 15, 2021 at 3:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Thu, May 13, 2021 at 2:56 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > Great thanks. I will review the remaining patch soon.\n> >\n> > I have reviewed v28-0003, and I have some comments on this.\n> >\n> > ===\n> > @@ -126,9 +127,14 @@ XLogBeginInsert(void)\n> > Assert(mainrdata_last == (XLogRecData *) &mainrdata_head);\n> > Assert(mainrdata_len == 0);\n> >\n> > + /*\n> > + * WAL permission must have checked before entering the critical section.\n> > + * Otherwise, WAL prohibited error will force system panic.\n> > + */\n> > + Assert(walpermit_checked_state != WALPERMIT_UNCHECKED ||\n> > !CritSectionCount);\n> > +\n> > /* cross-check on whether we should be here or not */\n> > - if (!XLogInsertAllowed())\n> > - elog(ERROR, \"cannot make new WAL entries during recovery\");\n> > + CheckWALPermitted();\n> >\n> > We must not call CheckWALPermitted inside the critical section,\n> > instead if we are here we must be sure that\n> > WAL is permitted, so better put an assert. Even if that is ensured by\n> > some other mean then also I don't\n> > see any reason for calling this error generating function.\n> >\n>\n> I understand that we should not have an error inside a critical section but\n> this check is not wrong. Patch has enough checking so that errors due to WAL\n> prohibited state must not hit in the critical section, see assert just before\n> CheckWALPermitted(). Before entering into the critical section, we do have an\n> explicit WAL prohibited check. 
And to make sure that check has been done for\n> all current critical section for the wal writes, we have aforesaid assert\n> checking, for more detail on this please have a look at the \"WAL prohibited\n> system state\" section of src/backend/access/transam/README added in 0004 patch.\n> This assertion also ensures that future development does not miss the WAL\n> prohibited state check before entering into a newly added critical section for\n> WAL writes.\n\nI think we do need the CheckWALPermitted() check in XLogBeginInsert(),\nbecause XLogBeginInsert() may be called outside a critical\nsection, e.g. from pg_truncate_visibility_map(), and then we should error out.\nSo this check makes sense to me.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 17 May 2021 13:06:58 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "Attached is rebase for the latest master head. Also, I added one more\nrefactoring code that deduplicates the code setting database state in the\ncontrol file. The same code set the database state is also needed for this\nfeature.\n\nRegards.\nAmul\n\nOn Mon, May 17, 2021 at 1:07 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, May 17, 2021 at 11:48 AM Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > On Sat, May 15, 2021 at 3:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Thu, May 13, 2021 at 2:56 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > Great thanks. I will review the remaining patch soon.\n> > >\n> > > I have reviewed v28-0003, and I have some comments on this.\n> > >\n> > > ===\n> > > @@ -126,9 +127,14 @@ XLogBeginInsert(void)\n> > > Assert(mainrdata_last == (XLogRecData *) &mainrdata_head);\n> > > Assert(mainrdata_len == 0);\n> > >\n> > > + /*\n> > > + * WAL permission must have checked before entering the critical section.\n> > > + * Otherwise, WAL prohibited error will force system panic.\n> > > + */\n> > > + Assert(walpermit_checked_state != WALPERMIT_UNCHECKED ||\n> > > !CritSectionCount);\n> > > +\n> > > /* cross-check on whether we should be here or not */\n> > > - if (!XLogInsertAllowed())\n> > > - elog(ERROR, \"cannot make new WAL entries during recovery\");\n> > > + CheckWALPermitted();\n> > >\n> > > We must not call CheckWALPermitted inside the critical section,\n> > > instead if we are here we must be sure that\n> > > WAL is permitted, so better put an assert. Even if that is ensured by\n> > > some other mean then also I don't\n> > > see any reason for calling this error generating function.\n> > >\n> >\n> > I understand that we should not have an error inside a critical section but\n> > this check is not wrong. Patch has enough checking so that errors due to WAL\n> > prohibited state must not hit in the critical section, see assert just before\n> > CheckWALPermitted(). 
Before entering into the critical section, we do have an\n> > explicit WAL prohibited check. And to make sure that check has been done for\n> > all current critical section for the wal writes, we have aforesaid assert\n> > checking, for more detail on this please have a look at the \"WAL prohibited\n> > system state\" section of src/backend/access/transam/README added in 0004 patch.\n> > This assertion also ensures that future development does not miss the WAL\n> > prohibited state check before entering into a newly added critical section for\n> > WAL writes.\n>\n> I think we need CheckWALPermitted(); check, in XLogBeginInsert()\n> function because if XLogBeginInsert() maybe called outside critical\n> section e.g. pg_truncate_visibility_map() then we should error out.\n> So this check make sense to me.\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 17 Jun 2021 10:52:27 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Jun 17, 2021 at 1:23 AM Amul Sul <sulamul@gmail.com> wrote:\n> Attached is rebase for the latest master head. Also, I added one more\n> refactoring code that deduplicates the code setting database state in the\n> control file. The same code set the database state is also needed for this\n> feature.\n\nI started studying 0001 today and found that it rearranged the order\nof operations in StartupXLOG() more than I was expecting. It does, as\nper previous discussions, move a bunch of things to the place where we\nnow call XLogParamters(). But, unsatisfyingly, InRecovery = false and\nXLogReaderFree() then have to move down even further. Since the goal\nhere is to get to a situation where we sometimes XLogAcceptWrites()\nafter InRecovery = false, it didn't seem nice for this refactoring\npatch to still end up with a situation where this stuff happens while\nInRecovery = true. In fact, with the patch, the amount of code that\nruns with InRecovery = true actually *increases*, which is not what I\nthink should be happening here. That's why the patch ends up having to\nadjust SetMultiXactIdLimit to not Assert(!InRecovery).\n\nAnd then I started to wonder how this was ever going to work as part\nof the larger patch set, because as you have it here,\nXLogAcceptWrites() takes arguments XLogReaderState *xlogreader,\nXLogRecPtr EndOfLog, and TimeLineID EndOfLogTLI and if the\ncheckpointer is calling that at a later time after the user issues\npg_prohibit_wal(false), it's going to have none of those things. So I\nhad a quick look at that part of the code and found this in\ncheckpointer.c:\n\nXLogAcceptWrites(true, NULL, InvalidXLogRecPtr, 0);\n\nFor those following along from home, the additional \"true\" is a bool\nneedChkpt argument added to XLogAcceptWrites() by 0003. Well, none of\nthis is very satisfying. 
The whole purpose of passing the xlogreader\nis so we can figure out whether we need a checkpoint (never mind the\nquestion of whether the existing algorithm for determining that is\nreally sensible) but now we need a second argument that basically\nserves the same purpose since one of the two callers to this function\nwon't have an xlogreader. And then we're passing the EndOfLog and\nEndOfLogTLI as dummy values which seems like it's probably just\ntotally wrong, but if for some reason it works correctly there sure\ndon't seem to be any comments explaining why.\n\nSo I started doing a bit of hacking myself and ended up with the\nattached, which I think is not completely the right thing yet but I\nthink it's better than your version. I split this into three parts.\n0001 splits up the logic that currently decides whether to write an\nend-of-recovery record or a checkpoint record and if the latter how\nthe checkpoint ought to be performed into two functions.\nDetermineRecoveryXlogAction() figures out what we want to do, and\nPerformRecoveryXlogAction() does it. It also moves the code to run\nrecovery_end_command and related stuff into a new function\nCleanupAfterArchiveRecovery(). 0002 then builds on this by postponing\nUpdateFullPageWrites(), PerformRecoveryXLogAction(), and\nCleanupAfterArchiveRecovery() to just before we\nXLogReportParameters(). Because of the refactoring done by 0001, this\nis only a small amount of code movement. Because of the separation\nbetween DetermineRecoveryXlogAction() and PerformRecoveryXlogAction(),\nthe latter doesn't need the xlogreader. So we can do\nDetermineRecoveryXlogAction() at the same time as now, while the\nxlogreader is available, and then we don't need it later when we\nPerformRecoveryXlogAction(), because we already know what we need to\nknow. I think this is all fine as far as it goes.\n\nMy 0003 is where I see some lingering problems. 
It creates\nXLogAcceptWrites(), moves the appropriate stuff there, and doesn't\nneed the xlogreader. But it doesn't really solve the problem of how\ncheckpointer.c would be able to call this function with proper\narguments. It is at least better in not needing two arguments to\ndecide what to do, but how is checkpointer.c supposed to know what to\npass for xlogaction? Worse yet, how is checkpointer.c supposed to know\nwhat to pass for EndOfLogTLI and EndOfLog? Actually, EndOfLog doesn't\nseem too problematic, because that value has been stored in four (!)\nplaces inside XLogCtl by this code:\n\n LogwrtResult.Write = LogwrtResult.Flush = EndOfLog;\n\n XLogCtl->LogwrtResult = LogwrtResult;\n\n XLogCtl->LogwrtRqst.Write = EndOfLog;\n XLogCtl->LogwrtRqst.Flush = EndOfLog;\n\nPresumably we could relatively easily change things around so that we\nfinish one of those values ... probably one of the \"write\" values ..\nback out of XLogCtl instead of passing it as a parameter. That would\nwork just as well from the checkpointer as from the startup process,\nand there seems to be no way for the value to change until after\nXLogAcceptWrites() has been called, so it seems fine. But that doesn't\nhelp for the other arguments. What I'm thinking is that we should just\narrange to store EndOfLogTLI and xlogaction into XLogCtl also, and\nthen XLogAcceptWrites() can fish those values out of there as well,\nwhich should be enough to make it work and do the same thing\nregardless of which process is calling it. But I have run out of time\nfor today so have not explored coding that up.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 23 Jul 2021 16:03:25 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Fri, Jul 23, 2021 at 4:03 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> My 0003 is where I see some lingering problems. It creates\n> XLogAcceptWrites(), moves the appropriate stuff there, and doesn't\n> need the xlogreader. But it doesn't really solve the problem of how\n> checkpointer.c would be able to call this function with proper\n> arguments. It is at least better in not needing two arguments to\n> decide what to do, but how is checkpointer.c supposed to know what to\n> pass for xlogaction? Worse yet, how is checkpointer.c supposed to know\n> what to pass for EndOfLogTLI and EndOfLog?\n\nOn further study, I found another problem: the way my patch set leaves\nthings, XLogAcceptWrites() depends on ArchiveRecoveryRequested, which\nwill not be correctly initialized in any process other than the\nstartup process. So CleanupAfterArchiveRecovery(EndOfLogTLI, EndOfLog)\nwould just be skipped. Your 0001 seems to have the same problem. You\nadded Assert(AmStartupProcess()) to the inside of the if\n(ArchiveRecoveryRequested) block, but that doesn't fix anything.\nOutside the startup process, ArchiveRecoveryRequested will always be\nfalse, but the point is that the associated stuff should be done if\nArchiveRecoveryRequested would have been true in the startup process.\nBoth of our patch sets leave things in a state where that would never\nhappen, which is not good. Unless I'm missing something, it seems like\nmaybe you didn't test your patches to verify that, when the\nXLogAcceptWrites() call comes from the checkpointer, all the same\nthings happen that would have happened had it been called from the\nstartup process. That would be a really good thing to have tested\nbefore posting your patches.\n\nAs far as EndOfLogTLI is concerned, there are, somewhat annoyingly,\nseveral TLIs stored in XLogCtl. None of them seem to be precisely the\nsame thing as EndLogTLI, but I am hoping that replayEndTLI is close\nenough. 
I found out pretty quickly through testing that replayEndTLI\nisn't always valid -- it ends up 0 if we don't enter recovery. That's\nnot really a problem, though, because we only need it to be valid if\nArchiveRecoveryRequested. The code that initializes and updates it\nseems to run whenever InRecovery = true, and ArchiveRecoveryRequested\n= true will force InRecovery = true. So it looks to me like\nreplayEndTLI will always be initialized in the cases where we need a\nvalue. It's not yet entirely clear to me if it has to have the same\nvalue as EndOfLogTLI. I find this code comment quite mysterious:\n\n /*\n * EndOfLogTLI is the TLI in the filename of the XLOG segment containing\n * the end-of-log. It could be different from the timeline that EndOfLog\n * nominally belongs to, if there was a timeline switch in that segment,\n * and we were reading the old WAL from a segment belonging to a higher\n * timeline.\n */\n EndOfLogTLI = xlogreader->seg.ws_tli;\n\nThe thing is, if we were reading old WAL from a segment belonging to a\nhigher timeline, wouldn't we have switched to that new timeline?\nSuppose we want WAL segment 246 from TLI 1, but we don't have that\nsegment on TLI 1, only TLI 2. Well, as far as I know, for us to use\nthe TLI 2 version, we'd need to have TLI 2 in the history of the\nrecovery_target_timeline. And if that is the case, then we would have\nto replay through the record where the timeline changes. And if we do\nthat, then the discrepancy postulated by the comment cannot still\nexist by the time we reach this code, because this code is only\nreached after we finish WAL redo. So I'm baffled as to how this can\nhappen, but considering how many cases there are in this code, I sure\ncan't promise that it doesn't. The fact that we have few tests for any\nof this doesn't help either.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 27 Jul 2021 16:55:52 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, Jul 28, 2021 at 2:26 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Jul 23, 2021 at 4:03 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > My 0003 is where I see some lingering problems. It creates\n> > XLogAcceptWrites(), moves the appropriate stuff there, and doesn't\n> > need the xlogreader. But it doesn't really solve the problem of how\n> > checkpointer.c would be able to call this function with proper\n> > arguments. It is at least better in not needing two arguments to\n> > decide what to do, but how is checkpointer.c supposed to know what to\n> > pass for xlogaction? Worse yet, how is checkpointer.c supposed to know\n> > what to pass for EndOfLogTLI and EndOfLog?\n>\n> On further study, I found another problem: the way my patch set leaves\n> things, XLogAcceptWrites() depends on ArchiveRecoveryRequested, which\n> will not be correctly initialized in any process other than the\n> startup process. So CleanupAfterArchiveRecovery(EndOfLogTLI, EndOfLog)\n> would just be skipped. Your 0001 seems to have the same problem. You\n> added Assert(AmStartupProcess()) to the inside of the if\n> (ArchiveRecoveryRequested) block, but that doesn't fix anything.\n> Outside the startup process, ArchiveRecoveryRequested will always be\n> false, but the point is that the associated stuff should be done if\n> ArchiveRecoveryRequested would have been true in the startup process.\n> Both of our patch sets leave things in a state where that would never\n> happen, which is not good. Unless I'm missing something, it seems like\n> maybe you didn't test your patches to verify that, when the\n> XLogAcceptWrites() call comes from the checkpointer, all the same\n> things happen that would have happened had it been called from the\n> startup process. That would be a really good thing to have tested\n> before posting your patches.\n>\n\nMy bad, I am extremely sorry about that. 
I usually do test my patches,\nbut somehow I failed to test this change due to manually testing the\nwhole ASRO feature and hurrying in posting the newest version.\n\nI will try to be more careful next time.\n\n> As far as EndOfLogTLI is concerned, there are, somewhat annoyingly,\n> several TLIs stored in XLogCtl. None of them seem to be precisely the\n> same thing as EndLogTLI, but I am hoping that replayEndTLI is close\n> enough. I found out pretty quickly through testing that replayEndTLI\n> isn't always valid -- it ends up 0 if we don't enter recovery. That's\n> not really a problem, though, because we only need it to be valid if\n> ArchiveRecoveryRequested. The code that initializes and updates it\n> seems to run whenever InRecovery = true, and ArchiveRecoveryRequested\n> = true will force InRecovery = true. So it looks to me like\n> replayEndTLI will always be initialized in the cases where we need a\n> value. It's not yet entirely clear to me if it has to have the same\n> value as EndOfLogTLI. I find this code comment quite mysterious:\n>\n> /*\n> * EndOfLogTLI is the TLI in the filename of the XLOG segment containing\n> * the end-of-log. It could be different from the timeline that EndOfLog\n> * nominally belongs to, if there was a timeline switch in that segment,\n> * and we were reading the old WAL from a segment belonging to a higher\n> * timeline.\n> */\n> EndOfLogTLI = xlogreader->seg.ws_tli;\n>\n> The thing is, if we were reading old WAL from a segment belonging to a\n> higher timeline, wouldn't we have switched to that new timeline?\n\nAFAIUC, by browsing the code, yes, we are switching to the new\ntimeline. 
Along with lastReplayedTLI, lastReplayedEndRecPtr is also\nthe same as the EndOfLog that we needed when ArchiveRecoveryRequested\nis true.\n\nI went through the original commit 7cbee7c0a1db and the thread[1] but\ndidn't find any related discussion for that.\n\n> Suppose we want WAL segment 246 from TLI 1, but we don't have that\n> segment on TLI 1, only TLI 2. Well, as far as I know, for us to use\n> the TLI 2 version, we'd need to have TLI 2 in the history of the\n> recovery_target_timeline. And if that is the case, then we would have\n> to replay through the record where the timeline changes. And if we do\n> that, then the discrepancy postulated by the comment cannot still\n> exist by the time we reach this code, because this code is only\n> reached after we finish WAL redo. So I'm baffled as to how this can\n> happen, but considering how many cases there are in this code, I sure\n> can't promise that it doesn't. The fact that we have few tests for any\n> of this doesn't help either.\n\nI am not an expert in this area, but will try to spend some more time\non understanding and testing.\n\n1] postgr.es/m/555DD101.7080209@iki.fi\n\nRegards,\nAmul\n\n\n",
"msg_date": "Wed, 28 Jul 2021 16:37:01 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, Jul 28, 2021 at 4:37 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Wed, Jul 28, 2021 at 2:26 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Fri, Jul 23, 2021 at 4:03 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > My 0003 is where I see some lingering problems. It creates\n> > > XLogAcceptWrites(), moves the appropriate stuff there, and doesn't\n> > > need the xlogreader. But it doesn't really solve the problem of how\n> > > checkpointer.c would be able to call this function with proper\n> > > arguments. It is at least better in not needing two arguments to\n> > > decide what to do, but how is checkpointer.c supposed to know what to\n> > > pass for xlogaction? Worse yet, how is checkpointer.c supposed to know\n> > > what to pass for EndOfLogTLI and EndOfLog?\n> >\n> > On further study, I found another problem: the way my patch set leaves\n> > things, XLogAcceptWrites() depends on ArchiveRecoveryRequested, which\n> > will not be correctly initialized in any process other than the\n> > startup process. So CleanupAfterArchiveRecovery(EndOfLogTLI, EndOfLog)\n> > would just be skipped. Your 0001 seems to have the same problem. You\n> > added Assert(AmStartupProcess()) to the inside of the if\n> > (ArchiveRecoveryRequested) block, but that doesn't fix anything.\n> > Outside the startup process, ArchiveRecoveryRequested will always be\n> > false, but the point is that the associated stuff should be done if\n> > ArchiveRecoveryRequested would have been true in the startup process.\n> > Both of our patch sets leave things in a state where that would never\n> > happen, which is not good. Unless I'm missing something, it seems like\n> > maybe you didn't test your patches to verify that, when the\n> > XLogAcceptWrites() call comes from the checkpointer, all the same\n> > things happen that would have happened had it been called from the\n> > startup process. 
That would be a really good thing to have tested\n> > before posting your patches.\n> >\n>\n> My bad, I am extremely sorry about that. I usually do test my patches,\n> but somehow I failed to test this change due to manually testing the\n> whole ASRO feature and hurrying in posting the newest version.\n>\n> I will try to be more careful next time.\n>\n\nI was too worried about how I could miss that & after thinking more\nabout that, I realized that the operation for ArchiveRecoveryRequested\nis never going to be skipped in the startup process and that never\nleft for the checkpoint process to do that later. That is the reason\nthat assert was added there.\n\nWhen ArchiveRecoveryRequested, the server will no longer be in\nthe wal prohibited mode, we implicitly change the state to\nwal-permitted. Here is the snip from the 0003 patch:\n\n@@ -6614,13 +6629,30 @@ StartupXLOG(void)\n (errmsg(\"starting archive recovery\")));\n }\n\n- /*\n- * Take ownership of the wakeup latch if we're going to sleep during\n- * recovery.\n- */\n if (ArchiveRecoveryRequested)\n+ {\n+ /*\n+ * Take ownership of the wakeup latch if we're going to sleep during\n+ * recovery.\n+ */\n OwnLatch(&XLogCtl->recoveryWakeupLatch);\n\n+ /*\n+ * Since archive recovery is requested, we cannot be in a wal prohibited\n+ * state.\n+ */\n+ if (ControlFile->wal_prohibited)\n+ {\n+ /* No need to hold ControlFileLock yet, we aren't up far enough */\n+ ControlFile->wal_prohibited = false;\n+ ControlFile->time = (pg_time_t) time(NULL);\n+ UpdateControlFile();\n+\n+ ereport(LOG,\n+ (errmsg(\"clearing WAL prohibition because the system is in archive\nrecovery\")));\n+ }\n+ }\n+\n\n\n> > As far as EndOfLogTLI is concerned, there are, somewhat annoyingly,\n> > several TLIs stored in XLogCtl. None of them seem to be precisely the\n> > same thing as EndLogTLI, but I am hoping that replayEndTLI is close\n> > enough. 
I found out pretty quickly through testing that replayEndTLI\n> > isn't always valid -- it ends up 0 if we don't enter recovery. That's\n> > not really a problem, though, because we only need it to be valid if\n> > ArchiveRecoveryRequested. The code that initializes and updates it\n> > seems to run whenever InRecovery = true, and ArchiveRecoveryRequested\n> > = true will force InRecovery = true. So it looks to me like\n> > replayEndTLI will always be initialized in the cases where we need a\n> > value. It's not yet entirely clear to me if it has to have the same\n> > value as EndOfLogTLI. I find this code comment quite mysterious:\n> >\n> > /*\n> > * EndOfLogTLI is the TLI in the filename of the XLOG segment containing\n> > * the end-of-log. It could be different from the timeline that EndOfLog\n> > * nominally belongs to, if there was a timeline switch in that segment,\n> > * and we were reading the old WAL from a segment belonging to a higher\n> > * timeline.\n> > */\n> > EndOfLogTLI = xlogreader->seg.ws_tli;\n> >\n> > The thing is, if we were reading old WAL from a segment belonging to a\n> > higher timeline, wouldn't we have switched to that new timeline?\n>\n> AFAIUC, by browsing the code, yes, we are switching to the new\n> timeline. Along with lastReplayedTLI, lastReplayedEndRecPtr is also\n> the same as the EndOfLog that we needed when ArchiveRecoveryRequested\n> is true.\n>\n> I went through the original commit 7cbee7c0a1db and the thread[1] but\n> didn't find any related discussion for that.\n>\n> > Suppose we want WAL segment 246 from TLI 1, but we don't have that\n> > segment on TLI 1, only TLI 2. Well, as far as I know, for us to use\n> > the TLI 2 version, we'd need to have TLI 2 in the history of the\n> > recovery_target_timeline. And if that is the case, then we would have\n> > to replay through the record where the timeline changes. 
And if we do\n> > that, then the discrepancy postulated by the comment cannot still\n> > exist by the time we reach this code, because this code is only\n> > reached after we finish WAL redo. So I'm baffled as to how this can\n> > happen, but considering how many cases there are in this code, I sure\n> > can't promise that it doesn't. The fact that we have few tests for any\n> > of this doesn't help either.\n>\n> I am not an expert in this area, but will try to spend some more time\n> on understanding and testing.\n>\n> 1] postgr.es/m/555DD101.7080209@iki.fi\n>\n> Regards,\n> Amul\n\n\n",
"msg_date": "Wed, 28 Jul 2021 17:02:33 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, Jul 28, 2021 at 5:03 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> I was too worried about how I could miss that & after thinking more\n> about that, I realized that the operation for ArchiveRecoveryRequested\n> is never going to be skipped in the startup process and that never\n> left for the checkpoint process to do that later. That is the reason\n> that assert was added there.\n>\n> When ArchiveRecoveryRequested, the server will no longer be in\n> the wal prohibited mode, we implicitly change the state to\n> wal-permitted. Here is the snip from the 0003 patch:\n>\n> @@ -6614,13 +6629,30 @@ StartupXLOG(void)\n> (errmsg(\"starting archive recovery\")));\n> }\n>\n> - /*\n> - * Take ownership of the wakeup latch if we're going to sleep during\n> - * recovery.\n> - */\n> if (ArchiveRecoveryRequested)\n> + {\n> + /*\n> + * Take ownership of the wakeup latch if we're going to sleep during\n> + * recovery.\n> + */\n> OwnLatch(&XLogCtl->recoveryWakeupLatch);\n>\n> + /*\n> + * Since archive recovery is requested, we cannot be in a wal prohibited\n> + * state.\n> + */\n> + if (ControlFile->wal_prohibited)\n> + {\n> + /* No need to hold ControlFileLock yet, we aren't up far enough */\n> + ControlFile->wal_prohibited = false;\n> + ControlFile->time = (pg_time_t) time(NULL);\n> + UpdateControlFile();\n> +\n\nIs there some reason why we are forcing 'wal_prohibited' to off if we\nare doing archive recovery? It might have already been discussed, but\nI could not find it on a quick look into the thread.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 29 Jul 2021 16:47:09 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 4:47 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Jul 28, 2021 at 5:03 PM Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > I was too worried about how I could miss that & after thinking more\n> > about that, I realized that the operation for ArchiveRecoveryRequested\n> > is never going to be skipped in the startup process and that never\n> > left for the checkpoint process to do that later. That is the reason\n> > that assert was added there.\n> >\n> > When ArchiveRecoveryRequested, the server will no longer be in\n> > the wal prohibited mode, we implicitly change the state to\n> > wal-permitted. Here is the snip from the 0003 patch:\n> >\n> > @@ -6614,13 +6629,30 @@ StartupXLOG(void)\n> > (errmsg(\"starting archive recovery\")));\n> > }\n> >\n> > - /*\n> > - * Take ownership of the wakeup latch if we're going to sleep during\n> > - * recovery.\n> > - */\n> > if (ArchiveRecoveryRequested)\n> > + {\n> > + /*\n> > + * Take ownership of the wakeup latch if we're going to sleep during\n> > + * recovery.\n> > + */\n> > OwnLatch(&XLogCtl->recoveryWakeupLatch);\n> >\n> > + /*\n> > + * Since archive recovery is requested, we cannot be in a wal prohibited\n> > + * state.\n> > + */\n> > + if (ControlFile->wal_prohibited)\n> > + {\n> > + /* No need to hold ControlFileLock yet, we aren't up far enough */\n> > + ControlFile->wal_prohibited = false;\n> > + ControlFile->time = (pg_time_t) time(NULL);\n> > + UpdateControlFile();\n> > +\n>\n> Is there some reason why we are forcing 'wal_prohibited' to off if we\n> are doing archive recovery? It might have already been discussed, but\n> I could not find it on a quick look into the thread.\n>\n\nHere is: https://postgr.es/m/CA+TgmoZ=CCTbAXxMTYZoGXEgqzOz9smkBWrDpsacpjvFcGCuaw@mail.gmail.com\n\nRegards,\nAmul\n\n\n",
"msg_date": "Thu, 29 Jul 2021 17:22:21 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, Jul 28, 2021 at 7:33 AM Amul Sul <sulamul@gmail.com> wrote:\n> I was too worried about how I could miss that & after thinking more\n> about that, I realized that the operation for ArchiveRecoveryRequested\n> is never going to be skipped in the startup process and that never\n> left for the checkpoint process to do that later. That is the reason\n> that assert was added there.\n>\n> When ArchiveRecoveryRequested, the server will no longer be in\n> the wal prohibited mode, we implicitly change the state to\n> wal-permitted. Here is the snip from the 0003 patch:\n\nUgh, OK. That makes sense, but I'm still not sure that I like it. I've\nkind of been wondering: why not have XLogAcceptWrites() be the\nresponsibility of the checkpointer all the time, in every case? That\nwould require fixing some more things, and this is one of them, but\nthen it would be consistent, which means that any bugs would be likely\nto get found and fixed. If calling XLogAcceptWrites() from the\ncheckpointer is some funny case that only happens when the system\ncrashes while WAL is prohibited, then we might fail to notice that we\nhave a bug.\n\nThis is especially true given that we have very little test coverage\nin this area. Andres was ranting to me about this earlier this week,\nand I wasn't sure he was right, but then I noticed that we have\nexactly zero tests in the entire source tree that make use of\nrecovery_end_command. We really need a TAP test for that, I think.\nIt's too scary to do much reorganization of the code without having\nany tests at all for the stuff we're moving around. Likewise, we're\ngoing to need TAP tests for the stuff that is specific to this patch.\nFor example, we should have a test that crashes the server while it's\nread only, brings it back up, checks that we still can't write WAL,\nthen re-enables WAL, and checks that we now can write WAL. 
There are\nprobably a bunch of other things that we should test, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 29 Jul 2021 12:16:43 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "Hi,\n\nOn Thu, Jul 29, 2021 at 9:46 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Jul 28, 2021 at 7:33 AM Amul Sul <sulamul@gmail.com> wrote:\n> > I was too worried about how I could miss that & after thinking more\n> > about that, I realized that the operation for ArchiveRecoveryRequested\n> > is never going to be skipped in the startup process and that never\n> > left for the checkpoint process to do that later. That is the reason\n> > that assert was added there.\n> >\n> > When ArchiveRecoveryRequested, the server will no longer be in\n> > the wal prohibited mode, we implicitly change the state to\n> > wal-permitted. Here is the snip from the 0003 patch:\n>\n> Ugh, OK. That makes sense, but I'm still not sure that I like it. I've\n> kind of been wondering: why not have XLogAcceptWrites() be the\n> responsibility of the checkpointer all the time, in every case? That\n> would require fixing some more things, and this is one of them, but\n> then it would be consistent, which means that any bugs would be likely\n> to get found and fixed. If calling XLogAcceptWrites() from the\n> checkpointer is some funny case that only happens when the system\n> crashes while WAL is prohibited, then we might fail to notice that we\n> have a bug.\n>\n> This is especially true given that we have very little test coverage\n> in this area. Andres was ranting to me about this earlier this week,\n> and I wasn't sure he was right, but then I noticed that we have\n> exactly zero tests in the entire source tree that make use of\n> recovery_end_command. We really need a TAP test for that, I think.\n> It's too scary to do much reorganization of the code without having\n> any tests at all for the stuff we're moving around. 
Likewise, we're\n> going to need TAP tests for the stuff that is specific to this patch.\n> For example, we should have a test that crashes the server while it's\n> read only, brings it back up, checks that we still can't write WAL,\n> then re-enables WAL, and checks that we now can write WAL. There are\n> probably a bunch of other things that we should test, too.\n>\n\nHi,\n\nI have been testing “ALTER SYSTEM READ ONLY” and wrote a few tap test cases\nfor this feature.\nPlease find the test case(Draft version) attached herewith, to be applied\non top of the v30 patch by Amul.\nKindly have a review and let me know the required changes.\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 30 Jul 2021 15:07:37 +0530",
"msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "Attached is the rebase version on top of the latest master head\nincludes refactoring patches posted by Robert.\n\nOn Thu, Jul 29, 2021 at 9:46 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Jul 28, 2021 at 7:33 AM Amul Sul <sulamul@gmail.com> wrote:\n> > I was too worried about how I could miss that & after thinking more\n> > about that, I realized that the operation for ArchiveRecoveryRequested\n> > is never going to be skipped in the startup process and that never\n> > left for the checkpoint process to do that later. That is the reason\n> > that assert was added there.\n> >\n> > When ArchiveRecoveryRequested, the server will no longer be in\n> > the wal prohibited mode, we implicitly change the state to\n> > wal-permitted. Here is the snip from the 0003 patch:\n>\n> Ugh, OK. That makes sense, but I'm still not sure that I like it. I've\n> kind of been wondering: why not have XLogAcceptWrites() be the\n> responsibility of the checkpointer all the time, in every case? That\n> would require fixing some more things, and this is one of them, but\n> then it would be consistent, which means that any bugs would be likely\n> to get found and fixed. If calling XLogAcceptWrites() from the\n> checkpointer is some funny case that only happens when the system\n> crashes while WAL is prohibited, then we might fail to notice that we\n> have a bug.\n>\n\nUnfortunately, I didn't get much time to think about this and don't\nhave a strong opinion on it either.\n\n> This is especially true given that we have very little test coverage\n> in this area. Andres was ranting to me about this earlier this week,\n> and I wasn't sure he was right, but then I noticed that we have\n> exactly zero tests in the entire source tree that make use of\n> recovery_end_command. We really need a TAP test for that, I think.\n> It's too scary to do much reorganization of the code without having\n> any tests at all for the stuff we're moving around. 
Likewise, we're\n> going to need TAP tests for the stuff that is specific to this patch.\n> For example, we should have a test that crashes the server while it's\n> read only, brings it back up, checks that we still can't write WAL,\n> then re-enables WAL, and checks that we now can write WAL. There are\n> probably a bunch of other things that we should test, too.\n>\n\nYes, my next plan is to work on the TAP tests and look into the patch\nposted by Prabhat to improve test coverage.\n\nRegards,\nAmul Sul",
"msg_date": "Wed, 4 Aug 2021 18:26:30 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "Attached is the rebased version for the latest master head. Also,\nadded tap tests to test some part of this feature and a separate patch\nto test recovery_end_command execution.\n\nI have also been through Prabhat's patch which helps me to write\ncurrent tests, but I am not sure about the few basic tests that he\nincluded in the tap test which can be done using pg_regress otherwise,\ne.g. checking permission to execute the pg_prohibit_wal() function.\nThose basic tests I am yet to add, is it ok to add those tests in\npg_regress instead of TAP? The problem I see is that all the tests\ncovering a feature will not be together, which I think is not correct.\n\nWhat is usual practice, can have a few tests in TAP and a few in\npg_regress for the same feature?\n\nRegards,\nAmul\n\n\n\n\n\nOn Wed, Aug 4, 2021 at 6:26 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> Attached is the rebase version on top of the latest master head\n> includes refactoring patches posted by Robert.\n>\n> On Thu, Jul 29, 2021 at 9:46 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Wed, Jul 28, 2021 at 7:33 AM Amul Sul <sulamul@gmail.com> wrote:\n> > > I was too worried about how I could miss that & after thinking more\n> > > about that, I realized that the operation for ArchiveRecoveryRequested\n> > > is never going to be skipped in the startup process and that never\n> > > left for the checkpoint process to do that later. That is the reason\n> > > that assert was added there.\n> > >\n> > > When ArchiveRecoveryRequested, the server will no longer be in\n> > > the wal prohibited mode, we implicitly change the state to\n> > > wal-permitted. Here is the snip from the 0003 patch:\n> >\n> > Ugh, OK. That makes sense, but I'm still not sure that I like it. I've\n> > kind of been wondering: why not have XLogAcceptWrites() be the\n> > responsibility of the checkpointer all the time, in every case? 
That\n> > would require fixing some more things, and this is one of them, but\n> > then it would be consistent, which means that any bugs would be likely\n> > to get found and fixed. If calling XLogAcceptWrites() from the\n> > checkpointer is some funny case that only happens when the system\n> > crashes while WAL is prohibited, then we might fail to notice that we\n> > have a bug.\n> >\n>\n> Unfortunately, I didn't get much time to think about this and don't\n> have a strong opinion on it either.\n>\n> > This is especially true given that we have very little test coverage\n> > in this area. Andres was ranting to me about this earlier this week,\n> > and I wasn't sure he was right, but then I noticed that we have\n> > exactly zero tests in the entire source tree that make use of\n> > recovery_end_command. We really need a TAP test for that, I think.\n> > It's too scary to do much reorganization of the code without having\n> > any tests at all for the stuff we're moving around. Likewise, we're\n> > going to need TAP tests for the stuff that is specific to this patch.\n> > For example, we should have a test that crashes the server while it's\n> > read only, brings it back up, checks that we still can't write WAL,\n> > then re-enables WAL, and checks that we now can write WAL. There are\n> > probably a bunch of other things that we should test, too.\n> >\n>\n> Yes, my next plan is to work on the TAP tests and look into the patch\n> posted by Prabhat to improve test coverage.\n>\n> Regards,\n> Amul Sul",
"msg_date": "Tue, 31 Aug 2021 17:45:40 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Tue, Aug 31, 2021 at 8:16 AM Amul Sul <sulamul@gmail.com> wrote:\n> Attached is the rebased version for the latest master head. Also,\n> added tap tests to test some part of this feature and a separate patch\n> to test recovery_end_command execution.\n\nIt looks like you haven't given any thought to writing that in a way\nthat will work on Windows?\n\n> What is usual practice, can have a few tests in TAP and a few in\n> pg_regress for the same feature?\n\nSure, there's no problem with that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 2 Sep 2021 11:47:55 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "\n\n> On Aug 31, 2021, at 5:15 AM, Amul Sul <sulamul@gmail.com> wrote:\n> \n> Attached is the rebased version for the latest master head. \n\nHi Amul!\n\nCould you please rebase again?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 7 Sep 2021 08:13:52 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Tue, 7 Sep 2021 at 8:43 PM, Mark Dilger <mark.dilger@enterprisedb.com>\nwrote:\n\n>\n>\n> > On Aug 31, 2021, at 5:15 AM, Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > Attached is the rebased version for the latest master head.\n>\n> Hi Amul!\n>\n> Could you please rebase again?\n>\n\nOk will do that tomorrow, thanks.\n\nRegards,\nAmul",
"msg_date": "Tue, 7 Sep 2021 22:02:32 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Tue, Sep 7, 2021 at 10:02 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n>\n>\n> On Tue, 7 Sep 2021 at 8:43 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>>\n>>\n>>\n>> > On Aug 31, 2021, at 5:15 AM, Amul Sul <sulamul@gmail.com> wrote:\n>> >\n>> > Attached is the rebased version for the latest master head.\n>>\n>> Hi Amul!\n>>\n>> Could you please rebase again?\n>\n>\n> Ok will do that tomorrow, thanks.\n>\n\nHere is the rebased version. I have added a few more test cases,\nperhaps needing more tests and optimization to it, that I'll try in\nthe next version. I dropped the patch for recovery_end_command\ntesting & will post that separately.\n\nRegards,\nAmul",
"msg_date": "Wed, 8 Sep 2021 19:14:54 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "\n\n> On Sep 8, 2021, at 6:44 AM, Amul Sul <sulamul@gmail.com> wrote:\n> \n> Here is the rebased version.\n\nv33-0004 \n\nThis patch moves the include of \"catalog/pg_control.h\" from transam/xlog.c into access/xlog.h, making pg_control.h indirectly included from a much larger set of files. Maybe that's ok. I don't know. But it seems you are doing this merely to get the symbol (not even the definition) for struct DBState. I'd recommend rearranging the code so this isn't necessary, but otherwise you'd at least want to remove the now redundant includes of catalog/pg_control.h from xlogdesc.c, xloginsert.c, auth-scram.c, postmaster.c, misc/pg_controldata.c, and pg_controldata/pg_controldata.c.\n\nv33-0005 \n\nThis patch makes bool XLogInsertAllowed() more complicated than before. The result used to depend mostly on the value of LocalXLogInsertAllowed except that when that value was negative, the result was determined by RecoveryInProgress(). There was an arcane rule that LocalXLogInsertAllowed must have the non-negative values binary coercible to boolean \"true\" and \"false\", with the basis for that rule being the coding of XLogInsertAllowed(). Now that the function is more complicated, this rule seems even more arcane. Can we change the logic to not depend on casting an integer to bool?\n\nThe code comment change in autovacuum.c introduces a non-grammatical sentence: \"First, the system is not read only i.e. wal writes permitted\".\n\nThe function comment in checkpointer.c reads more like it toggles the system into allowing something, rather than actually doing that same something: \"SendSignalToCheckpointer allows a process to send a signal to the checkpoint process\".\n\nThe new code comment in ipci.c contains a typo, but more importantly, it doesn't impart any knowledge beyond what a reader of the function name could already surmise. 
Perhaps the comment can better clarify what is happening: \"Set up wal probibit shared state\"\n\nThe new code comment in sync.c copies and changes a nearby comment but drops part of the verb phrase: \"As in ProcessSyncRequests, we don't want to stop wal prohibit change requests\". The nearby comment reads \"stop absorbing\". I think this one should read \"stop processing\". This same comment is used again below. Then a third comment reads \"For the same reason mentioned previously for the wal prohibit state change request check.\" That third comment is too glib.\n\ntcop/utility.c needlessly includes \"access/walprohibit.h\"\n\nwait_event.h extends enum WaitEventIO with new values WAIT_EVENT_WALPROHIBIT_STATE and WAIT_EVENT_WALPROHIBIT_STATE_CHANGE. I don't find the difference between these two names at all clear. Waiting for a state change is clear enough. But how is waiting on a state different?\n\nxlog.h defines a new enum. I don't find any of it clear; not the comment, nor the name of the enum, nor the names of the values:\n\n/* State of work that enables wal writes */\ntypedef enum XLogAcceptWritesState\n{\n XLOG_ACCEPT_WRITES_PENDING = 0, /* initial state, not started */\n XLOG_ACCEPT_WRITES_SKIPPED, /* skipped wal writes */\n XLOG_ACCEPT_WRITES_DONE /* wal writes are enabled */\n} XLogAcceptWritesState; \n\nThis enum seems to have been written from the point of view of someone who already knew what it was for. 
It needs to be written in a way that will be clear to people who have no idea what it is for.\n\nv33-0006:\n\nThe new code comments in brin.c and elsewhere should use the verb \"require\" rather than \"have\", otherwise \"building indexes\" reads as a noun phrase rather than as a gerund: /* Building indexes will have an XID */\n\nThe new function CheckWALPermitted() seems to test the current state of variables but not lock any of them, and the new function comment says:\n\n/*\n * In opposite to the above assertion if a transaction doesn't have valid XID\n * (e.g. VACUUM) then it won't be killed while changing the system state to WAL\n * prohibited. Therefore, we need to explicitly error out before entering into\n * the critical section.\n */\n\nThis suggests to me that a vacuum process can check whether wal is prohibited, then begin a critical section which needs wal to be allowed, and concurrently somebody else might disable wal without killing the vacuum process. I'm given to wonder what horrors await when the vacuum process does something that needs to be wal logged but cannot be. Does it trigger a panic? I don't like the idea that calling pg_prohibit_wal during a vacuum might panic the cluster. If there is some reason this is not a problem, I think the comment should explain it. In particular, why is it sufficient to check whether wal is prohibited before entering the critical section and not necessary to be sure it remains allowed through the lifetime of that critical section?\n\nv33-0007:\n\nI don't really like what the documentation has to say about pg_prohibit_wal. Why should pg_prohibit_wal differ from other signal sending functions in whether it returns a boolean? If you believe it must always succeed, you can still define it as returning a boolean and always return true. 
That leaves the door open to future code changes which might need to return false for some reason.\n\nBut I also don't like the idea that existing transactions with xids are immediately killed. Shouldn't this function take an optional timeout, perhaps defaulting to none, but otherwise allowing the user to put the system into WALPROHIBIT_STATE_GOING_READ_ONLY for a period of time before killing remaining transactions?\n\nWhy is this function defined to take a boolean such that pg_prohibit_wal(true) means to prohibit wal and pg_prohibit_wal(false) means to allow wal. Wouldn't a different function named pg_allow_wal() make it more clear? This also would be a better interface if taking the system read-only had a timeout as I suggested above, as such a timeout parameter when allowing wal is less clearly useful.\n\nThat's enough code review for now. Next I will review your regression tests....\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 9 Sep 2021 10:42:11 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Sep 9, 2021 at 1:42 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> v33-0006:\n>\n> The new code comments in brin.c and elsewhere should use the verb \"require\" rather than \"have\", otherwise \"building indexes\" reads as a noun phrase rather than as a gerund: /* Building indexes will have an XID */\n\nHonestly that sentence doesn't sound very clear even with a different verb.\n\n> This suggests to me that a vacuum process can check whether wal is prohibited, then begin a critical section which needs wal to be allowed, and concurrently somebody else might disable wal without killing the vacuum process. I'm given to wonder what horrors await when the vacuum process does something that needs to be wal logged but cannot be. Does it trigger a panic? I don't like the idea that calling pg_prohibit_wal durning a vacuum might panic the cluster. If there is some reason this is not a problem, I think the comment should explain it. In particular, why is it sufficient to check whether wal is prohibited before entering the critical section and not necessary to be sure it remains allowed through the lifetime of that critical section?\n\nThe idea here is that if a transaction already has an XID assigned, we\nhave to kill it off before we can declare the system read-only,\nbecause it will definitely write WAL when the transaction ends: either\na commit record, or an abort record, but definitely something. So\ncases where we write WAL without necessarily having an XID require\nspecial handling. They have to check whether WAL has become prohibited\nand error out if so, and they need to do so before entering the\ncritical section - because if the problem were detected for the first\ntime inside the critical section it would escalate to a PANIC, which\nwe do not want. 
Places where we're guaranteed to have an XID - e.g.\ninserting a heap tuple - don't need a run-time check before entering\nthe critical section, because the code can't be reached in the first\nplace if the system is WAL-read-only.\n\n> Why is this function defined to take a boolean such that pg_prohibit_wal(true) means to prohibit wal and pg_prohibit_wal(false) means to allow wal. Wouldn't a different function named pg_allow_wal() make it more clear? This also would be a better interface if taking the system read-only had a timeout as I suggested above, as such a timeout parameter when allowing wal is less clearly useful.\n\nHmm, I find pg_prohibit_wal(true/false) better than pg_prohibit_wal()\nand pg_allow_wal(), and would prefer pg_prohibit_wal(true/false,\ntimeout) over pg_prohibit_wal(timeout) and pg_allow_wal(), because I\nthink then once you find that one function you know how to do\neverything about that feature, whereas the other way you need to find\nboth functions to have the whole story. That said, I can see why\nsomebody else might prefer something else.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 9 Sep 2021 14:21:14 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "\n\n> On Sep 9, 2021, at 11:21 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> They have to check whether WAL has become prohibited\n> and error out if so, and they need to do so before entering the\n> critical section - because if the problem were detected for the first\n> time inside the critical section it would escalate to a PANIC, which\n> we do not want.\n\nBut that is the part that is still not clear. Should the comment say that a concurrent change to prohibit wal after the current process checks but before the current process exits the critical section will result in a panic? What is unclear about the comment is that it implies that a check before the critical section is sufficient, but ordinarily one would expect a lock to be held and the check-and-lock dance to carefully avoid any race condition. If somehow this is safe, the logic for why it is safe should be spelled out. If not, a mea culpa saying, \"hey, we're not terribly safe about this\" should be explicit in the comment.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 9 Sep 2021 11:49:34 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Sep 9, 2021 at 11:12 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n>\n\nThank you for looking at the patch. Please see my reply inline below:\n\n>\n> > On Sep 8, 2021, at 6:44 AM, Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > Here is the rebased version.\n>\n> v33-0004\n>\n> This patch moves the include of \"catalog/pg_control.h\" from transam/xlog.c into access/xlog.h, making pg_control.h indirectly included from a much larger set of files. Maybe that's ok. I don't know. But it seems you are doing this merely to get the symbol (not even the definition) for struct DBState. I'd recommend rearranging the code so this isn't necessary, but otherwise you'd at least want to remove the now redundant includes of catalog/pg_control.h from xlogdesc.c, xloginsert.c, auth-scram.c, postmaster.c, misc/pg_controldata.c, and pg_controldata/pg_controldata.c.\n>\n\nYes, you are correct, xlog.h is included in more than 150 files. I was\nwondering if we can have a forward declaration instead of including\npg_control.h (e.g. the same way struct XLogRecData was declared in\nxlog.h). Perhaps; DBState is an enum & I don't see that we have done the same\nfor enums elsewhere as we are doing for structures, but that seems to\nbe fine, IMO.\n\nEarlier, I was unsure before preparing this patch, but since that\nmakes sense (I assumed) and minimizes duplications, can we go ahead\nand post separately with the same change in StartupXLOG() which I have\nskipped for the same reason mentioned in patch commit-msg.\n\n> v33-0005\n>\n> This patch makes bool XLogInsertAllowed() more complicated than before. The result used to depend mostly on the value of LocalXLogInsertAllowed except that when that value was negative, the result was determined by RecoveryInProgress(). There was an arcane rule that LocalXLogInsertAllowed must have the non-negative values binary coercible to boolean \"true\" and \"false\", with the basis for that rule being the coding of XLogInsertAllowed(). 
Now that the function is more complicated, this rule seems even more arcane. Can we change the logic to not depend on casting an integer to bool?\n>\n\nWe can't use a boolean variable because LocalXLogInsertAllowed\nrepresents three states: 1 means \"wal is allowed\", 0 means \"wal is\ndisallowed\", and -1 means \"need to check\".\n\n> The code comment change in autovacuum.c introduces a non-grammatical sentence: \"First, the system is not read only i.e. wal writes permitted\".\n>\n> The function comment in checkpointer.c reads more like it toggles the system into allowing something, rather than actually doing that same something: \"SendSignalToCheckpointer allows a process to send a signal to the checkpoint process\".\n>\n> The new code comment in ipci.c contains a typo, but more importantly, it doesn't impart any knowledge beyond what a reader of the function name could already surmise. Perhaps the comment can better clarify what is happening: \"Set up wal probibit shared state\"\n>\n> The new code comment in sync.c copies and changes a nearby comment but drops part of the verb phrase: \"As in ProcessSyncRequests, we don't want to stop wal prohibit change requests\". The nearby comment reads \"stop absorbing\". I think this one should read \"stop processing\". This same comment is used again below. Then a third comment reads \"For the same reason mentioned previously for the wal prohibit state change request check.\" That third comment is too glib.\n>\n> tcop/utility.c needlessly includes \"access/walprohibit.h\"\n>\n> wait_event.h extends enum WaitEventIO with new values WAIT_EVENT_WALPROHIBIT_STATE and WAIT_EVENT_WALPROHIBIT_STATE_CHANGE. I don't find the difference between these two names at all clear. Waiting for a state change is clear enough. But how is waiting on a state different?\n>\n> xlog.h defines a new enum. 
I don't find any of it clear; not the comment, nor the name of the enum, nor the names of the values:\n>\n> /* State of work that enables wal writes */\n> typedef enum XLogAcceptWritesState\n> {\n> XLOG_ACCEPT_WRITES_PENDING = 0, /* initial state, not started */\n> XLOG_ACCEPT_WRITES_SKIPPED, /* skipped wal writes */\n> XLOG_ACCEPT_WRITES_DONE /* wal writes are enabled */\n> } XLogAcceptWritesState;\n>\n> This enum seems to have been written from the point of view of someone who already knew what it was for. It needs to be written in a way that will be clear to people who have no idea what it is for.\n>\n> v33-0006:\n>\n> The new code comments in brin.c and elsewhere should use the verb \"require\" rather than \"have\", otherwise \"building indexes\" reads as a noun phrase rather than as a gerund: /* Building indexes will have an XID */\n>\n\nWill try to think about the pointed code comments for the improvements.\n\n> The new function CheckWALPermitted() seems to test the current state of variables but not lock any of them, and the new function comment says:\n>\n\nCheckWALPermitted() calls XLogInsertAllowed() does check the\nLocalXLogInsertAllowed flag which is local to that process only, and\nnobody else reads that concurrently.\n\n> /*\n> * In opposite to the above assertion if a transaction doesn't have valid XID\n> * (e.g. VACUUM) then it won't be killed while changing the system state to WAL\n> * prohibited. Therefore, we need to explicitly error out before entering into\n> * the critical section.\n> */\n>\n> This suggests to me that a vacuum process can check whether wal is prohibited, then begin a critical section which needs wal to be allowed, and concurrently somebody else might disable wal without killing the vacuum process. I'm given to wonder what horrors await when the vacuum process does something that needs to be wal logged but cannot be. Does it trigger a panic? 
I don't like the idea that calling pg_prohibit_wal during a vacuum might panic the cluster. If there is some reason this is not a problem, I think the comment should explain it. In particular, why is it sufficient to check whether wal is prohibited before entering the critical section and not necessary to be sure it remains allowed through the lifetime of that critical section?\n>\n\nHm, interrupt absorption is disabled inside the critical section.\nThe wal prohibited state for that process (here vacuum) will never get\nset until it sees the interrupt & the system will not be said to be wal\nprohibited until every process sees that interrupt. I am not sure we\nshould explain the characteristics of the critical section at this\nplace; if wanted, we can add a brief note saying that inside the critical\nsection we should not worry about the state change, which never happens\nbecause interrupts are disabled there.\n\n> v33-0007:\n>\n> I don't really like what the documentation has to say about pg_prohibit_wal. Why should pg_prohibit_wal differ from other signal sending functions in whether it returns a boolean? If you believe it must always succeed, you can still define it as returning a boolean and always return true. That leaves the door open to future code changes which might need to return false for some reason.\n>\n\nOk, I am fine to always return true.\n\n> But I also don't like the idea that existing transactions with xids are immediately killed. Shouldn't this function take an optional timeout, perhaps defaulting to none, but otherwise allowing the user to put the system into WALPROHIBIT_STATE_GOING_READ_ONLY for a period of time before killing remaining transactions?\n>\n\nOk, will check.\n\n> Why is this function defined to take a boolean such that pg_prohibit_wal(true) means to prohibit wal and pg_prohibit_wal(false) means to allow wal. Wouldn't a different function named pg_allow_wal() make it more clear? 
This also would be a better interface if taking the system read-only had a timeout as I suggested above, as such a timeout parameter when allowing wal is less clearly useful.\n>\n\nLike Robert, I too am inclined to have a single function that is easy\nto remember. Apart from this, recently while testing this patch with\npgbench I had exhausted the connection limit and wanted to change\nthe system's prohibited state in between, but I was unable to do that;\nI wish I could do that using the pg_ctl option. How about having a\npg_ctl option to alter wal prohibited state?\n\n> That's enough code review for now. Next I will review your regression tests....\n>\nThanks again.\n\n\n",
"msg_date": "Fri, 10 Sep 2021 20:06:30 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "\n\n> On Sep 10, 2021, at 7:36 AM, Amul Sul <sulamul@gmail.com> wrote:\n> \n>> v33-0005\n>> \n>> This patch makes bool XLogInsertAllowed() more complicated than before. The result used to depend mostly on the value of LocalXLogInsertAllowed except that when that value was negative, the result was determined by RecoveryInProgress(). There was an arcane rule that LocalXLogInsertAllowed must have the non-negative values binary coercible to boolean \"true\" and \"false\", with the basis for that rule being the coding of XLogInsertAllowed(). Now that the function is more complicated, this rule seems even more arcane. Can we change the logic to not depend on casting an integer to bool?\n>> \n> \n> We can't use a boolean variable because LocalXLogInsertAllowed\n> represents three states as, 1 means \"wal is allowed'', 0 for \"wal is\n> disallowed\", and -1 is for \"need to check\".\n\nI'm complaining that we're using an integer rather than an enum. I'm ok if we define it so that WAL_ALLOWABLE_UNKNOWN = -1, WAL_DISALLOWED = 0, WAL_ALLOWED = 1 or such, but the logic of the function has gotten complicated enough that having to remember which number represents which logical condition has become a (small) mental burden. Given how hard the WAL code is to read and fully grok, I'd rather avoid any unnecessary burden, even small ones.\n\n>> The new function CheckWALPermitted() seems to test the current state of variables but not lock any of them, and the new function comment says:\n>> \n> \n> CheckWALPermitted() calls XLogInsertAllowed() does check the\n> LocalXLogInsertAllowed flag which is local to that process only, and\n> nobody else reads that concurrently.\n> \n>> /*\n>> * In opposite to the above assertion if a transaction doesn't have valid XID\n>> * (e.g. VACUUM) then it won't be killed while changing the system state to WAL\n>> * prohibited. 
Therefore, we need to explicitly error out before entering into\n>> * the critical section.\n>> */\n>> \n>> This suggests to me that a vacuum process can check whether wal is prohibited, then begin a critical section which needs wal to be allowed, and concurrently somebody else might disable wal without killing the vacuum process. I'm given to wonder what horrors await when the vacuum process does something that needs to be wal logged but cannot be. Does it trigger a panic? I don't like the idea that calling pg_prohibit_wal during a vacuum might panic the cluster. If there is some reason this is not a problem, I think the comment should explain it. In particular, why is it sufficient to check whether wal is prohibited before entering the critical section and not necessary to be sure it remains allowed through the lifetime of that critical section?\n>> \n> \n> Hm, interrupts absorption are disabled inside the critical section.\n> The wal prohibited state for that process (here vacuum) will never get\n> set until it sees the interrupts & the system will not be said wal\n> prohibited until every process sees that interrupts. I am not sure we\n> should explain the characteristics of the critical section at this\n> place, if want, we can add a brief saying that inside the critical\n> section we should not worry about the state change which never happens\n> because interrupts are disabled there.\n\nI think the fact that interrupts are disabled during critical sections is understood, so there is no need to mention that. The problem is that the method for taking the system read-only is less generally known, and readers of other sections of code need to jump to the definition of CheckWALPermitted to read the comments and understand what it does. 
Take for example a code stanza from heapam.c:\n\n if (needwal)\n CheckWALPermitted();\n\n /* NO EREPORT(ERROR) from here till changes are logged */\n START_CRIT_SECTION();\n\nNow, I know that interrupts won't be processed after starting the critical section, but I can see plain as day that an interrupt might get processed *during* CheckWALPermitted, since that function isn't atomic. It might happen after the check is meaningfully finished but before the function actually returns. So I'm not inclined to believe that the way this all works is dependent on interrupts being blocked. So I think, maybe this is all protected by some other scheme. But what? It's not clear from the code comments for CheckWALPermitted, so I'm left having to reverse engineer the system to understand it.\n\nOne interpretation is that the signal handler will exit() my backend if it receives a signal saying that the system is going read-only, so there is no race condition. But then why the call to CheckWALPermitted()? If this interpretation were correct, we'd happily enter the critical section without checking, secure in the knowledge that as long as we haven't exited yet, all is ok.\n\nAnother interpretation is that the whole thing is just a performance trick. Maybe we're ok with the idea that we will occasionally miss the fact that wal is prohibited, do whatever work we need in the critical section, and then fail later. But if that is true, it had better not be a panic, because designing the system to panic 1% of the time (or whatever percent it works out to be) isn't project style. 
So looking into the critical section in the heapam.c code, I see:\n\n XLogBeginInsert();\n XLogRegisterData((char *) &xlrec, SizeOfHeapInplace);\n \n XLogRegisterBuffer(0, buffer, REGBUF_STANDARD);\n XLogRegisterBufData(0, (char *) htup + htup->t_hoff, newlen);\n\nAnd jumping to the definition of XLogBeginInsert() I see\n\n /* \n * WAL permission must have checked before entering the critical section.\n * Otherwise, WAL prohibited error will force system panic.\n */\n\nSo now I'm flummoxed. Is it that the code is broken, or is it that I don't know what the strategy behind all this is? If there were a code comment saying how this all works, I'd be in a better position to either know that it is truly safe or alternately know that the strategy is wrong.\n\nEven if my analysis that this is all flawed is incorrect, I still think that a code comment would help.\n\n>> v33-0007:\n>> \n>> I don't really like what the documentation has to say about pg_prohibit_wal. Why should pg_prohibit_wal differ from other signal sending functions in whether it returns a boolean? If you believe it must always succeed, you can still define it as returning a boolean and always return true. That leaves the door open to future code changes which might need to return false for some reason.\n>> \n> \n> Ok, I am fine to always return true.\n\nOk.\n\n>> But I also don't like the idea that existing transactions with xids are immediately killed. Shouldn't this function take an optional timeout, perhaps defaulting to none, but otherwise allowing the user to put the system into WALPROHIBIT_STATE_GOING_READ_ONLY for a period of time before killing remaining transactions?\n>> \n> \n> Ok, will check.\n> \n>> Why is this function defined to take a boolean such that pg_prohibit_wal(true) means to prohibit wal and pg_prohibit_wal(false) means to allow wal. Wouldn't a different function named pg_allow_wal() make it more clear? 
This also would be a better interface if taking the system read-only had a timeout as I suggested above, as such a timeout parameter when allowing wal is less clearly useful.\n>> \n> \n> Like Robert, I am too inclined to have a single function that is easy\n> to remember.\n\nFor C language functions that take a bool argument, I can jump to the definition using ctags, and I assume most other developers can do so using whatever IDE they like. For SQL functions, it's a bit harder to jump to the definition, particularly if you are logged into a production server where non-essential software is intentionally missing. Then you have to wonder, what exactly is the boolean argument toggling here?\n\nI don't feel strongly about this, though, and you don't need to change it.\n\n> Apart from this, recently while testing this patch with\n> pgbench where I have exhausted the connection limit and want to change\n> the system's prohibited state in between but I was unable to do that,\n> I wish I could do that using the pg_clt option. How about having a\n> pg_clt option to alter wal prohibited state?\n\nI'd have to review the implementation, but sure, that sounds like a useful ability.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 10 Sep 2021 08:42:09 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "\n\n> On Sep 10, 2021, at 8:42 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> Take for example a code stanza from heapam.c:\n> \n> if (needwal)\n> CheckWALPermitted();\n> \n> /* NO EREPORT(ERROR) from here till changes are logged */\n> START_CRIT_SECTION();\n> \n> Now, I know that interrupts won't be processed after starting the critical section, but I can see plain as day that an interrupt might get processed *during* CheckWALPermitted, since that function isn't atomic. \n\nA better example may be found in gindatapage.c:\n\n needwal = RelationNeedsWAL(indexrel);\n if (needwal)\n {\n CheckWALPermitted();\n computeLeafRecompressWALData(leaf);\n }\n\n /* Apply changes to page */\n START_CRIT_SECTION();\n\nEven if CheckWALPermitted is assumed to be close enough to atomic to not be a problem (I don't agree), that argument can't be made here, as computeLeafRecompressWALData is not trivial and signals could easily be processed while it is running.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 10 Sep 2021 09:20:38 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Fri, Sep 10, 2021 at 12:20 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> A better example may be found in ginmetapage.c:\n>\n> needwal = RelationNeedsWAL(indexrel);\n> if (needwal)\n> {\n> CheckWALPermitted();\n> computeLeafRecompressWALData(leaf);\n> }\n>\n> /* Apply changes to page */\n> START_CRIT_SECTION();\n\nYeah, that looks sketchy. Why not move CheckWALPermitted() down a line?\n\n> Even if CheckWALPermitted is assumed to be close enough to atomic to not be a problem (I don't agree), that argument can't be made here, as computeLeafRecompressWALData is not trivial and signals could easily be processed while it is running.\n\nI think the relevant question here is not \"could a signal handler\nfire?\" but \"can we hit a CHECK_FOR_INTERRUPTS()?\". If the relevant\nquestion is the former, then there's no hope of ever making it work\nbecause there's always a race condition. But the signal handler is\nonly setting flags whose only effect is to make a subsequent\nCHECK_FOR_INTERRUPTS() do something, so it doesn't really matter when\nthe signal handler can run, but when CHECK_FOR_INTERRUPTS() can call\nProcessInterrupts().\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 10 Sep 2021 12:56:34 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "\n\n> On Sep 10, 2021, at 9:56 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> I think the relevant question here is not \"could a signal handler\n> fire?\" but \"can we hit a CHECK_FOR_INTERRUPTS()?\". If the relevant\n> question is the former, then there's no hope of ever making it work\n> because there's always a race condition. But the signal handler is\n> only setting flags whose only effect is to make a subsequent\n> CHECK_FOR_INTERRUPTS() do something, so it doesn't really matter when\n> the signal handler can run, but when CHECK_FOR_INTERRUPTS() can call\n> ProcessInterrupts().\n\nOk, that makes more sense. I was reviewing the code after first reviewing the documentation changes, which led me to believe the system was designed to respond more quickly than that:\n\n+ WAL prohibited is a read-only system state. Any permitted user can call\n+ <function>pg_prohibit_wal</function> function to forces the system into\n+ a WAL prohibited mode where insert write ahead log will be prohibited until\n+ the same function executed to change that state to read-write. Like Hot\n\nand\n\n+ Otherwise, it will be <literal>off</literal>. When the user requests WAL\n+ prohibited state, at that moment if any existing session is already running\n+ a transaction, and that transaction has already been performed or planning\n+ to perform wal write operations then the session running that transaction\n+ will be terminated.\n\n\"forces the system\" in the first part, and \"at that moment ... that transaction will be terminated\" sounds heavier handed than something which merely sets a flag asking the backend to exit. I was reading that as more immediate and then trying to figure out how the signal handling could possibly work, and failing to see how.\n\nThe README:\n\n+Any\n+backends which receive WAL prohibited system state transition barrier interrupt\n+need to stop WAL writing immediately. 
For barrier absorption the backed(s) will\n+kill the running transaction which has valid XID indicates that the transaction\n+has performed and/or planning WAL write.\n\nuses \"immediately\" and \"will kill the running transaction\" which reinforced the impression that this mechanism is heavier handed than it is.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 10 Sep 2021 10:16:48 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Fri, Sep 10, 2021 at 1:16 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> uses \"immediately\" and \"will kill the running transaction\" which reinforced the impression that this mechanism is heavier handed than it is.\n\nIt's intended to be just as immediate as e.g. pg_cancel_backend() and\npg_terminate_backend(), which work just the same way, but not any more\nso. I guess we could look at how things are worded in those cases.\nFrom a user perspective such things are usually pretty immediate, but\nnot as immediate as firing a signal handler. Computers are fast.[1]\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n[1] https://www.youtube.com/watch?v=6xijhqU8r2A\n\n\n",
"msg_date": "Fri, 10 Sep 2021 13:48:14 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "> On Jun 16, 2020, at 6:55 AM, amul sul <sulamul@gmail.com> wrote:\n> \n> (2) if the session is idle, we also need the top-level abort\n> record to be written immediately, but can't send an error to the client until the next\n> command is issued without losing wire protocol synchronization. For now, we just use\n> FATAL to kill the session; maybe this can be improved in the future.\n\nAndres,\n\nI'd like to have a patch that tests the impact of a vacuum running for xid wraparound purposes, blocked on a pinned page held by the cursor, when another session disables WAL. It would be very interesting to test how the vacuum handles that specific change. I have not figured out the cleanest way to do this, though, as we don't as a project yet have a standard way of setting up xid exhaustion in a regression test, do we? The closest I saw to it was your work in [1], but that doesn't seem to have made much headway recently, and is designed for the TAP testing infrastructure, which isn't useable from inside an isolation test. Do you have a suggestion how best to continue developing out the test infrastructure?\n\n\nAmul,\n\nThe most obvious way to test how your ALTER SYSTEM READ ONLY feature interacts with concurrent sessions is using the isolation tester in src/test/isolation/, but as it stands now, the first permutation that gets a FATAL causes the test to abort and all subsequent permutations to not run. Attached patch v34-0009 fixes that.\n\nAttached patch v34-0010 adds a test of cursors opened FOR UPDATE interacting with a system that is set read-only by a different session. The expected output is worth reviewing to see how this plays out. I don't see anything in there which is obviously wrong, but some of it is a bit clunky. For example, by the time the client sees an error \"FATAL: WAL is now prohibited\", the system may already have switched back to read-write. Also, it is a bit strange to get one of these errors on an attempted ROLLBACK. 
Once again, not wrong as such, but clunky.\n\n\n\n\n\n\n\n\n[1] https://www.postgresql.org/message-id/flat/CAP4vRV5gEHFLB7NwOE6_dyHAeVfkvqF8Z_g5GaCQZNgBAE0Frw%40mail.gmail.com#e10861372aec22119b66756ecbac581c\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 14 Sep 2021 16:04:01 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Sat, Jul 24, 2021 at 1:33 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Jun 17, 2021 at 1:23 AM Amul Sul <sulamul@gmail.com> wrote:\n> > Attached is rebase for the latest master head. Also, I added one more\n> > refactoring code that deduplicates the code setting database state in the\n> > control file. The same code set the database state is also needed for this\n> > feature.\n>\n> I started studying 0001 today and found that it rearranged the order\n> of operations in StartupXLOG() more than I was expecting. It does, as\n> per previous discussions, move a bunch of things to the place where we\n> now call XLogParamters(). But, unsatisfyingly, InRecovery = false and\n> XLogReaderFree() then have to move down even further. Since the goal\n> here is to get to a situation where we sometimes XLogAcceptWrites()\n> after InRecovery = false, it didn't seem nice for this refactoring\n> patch to still end up with a situation where this stuff happens while\n> InRecovery = true. In fact, with the patch, the amount of code that\n> runs with InRecovery = true actually *increases*, which is not what I\n> think should be happening here. That's why the patch ends up having to\n> adjust SetMultiXactIdLimit to not Assert(!InRecovery).\n>\n> And then I started to wonder how this was ever going to work as part\n> of the larger patch set, because as you have it here,\n> XLogAcceptWrites() takes arguments XLogReaderState *xlogreader,\n> XLogRecPtr EndOfLog, and TimeLineID EndOfLogTLI and if the\n> checkpointer is calling that at a later time after the user issues\n> pg_prohibit_wal(false), it's going to have none of those things. So I\n> had a quick look at that part of the code and found this in\n> checkpointer.c:\n>\n> XLogAcceptWrites(true, NULL, InvalidXLogRecPtr, 0);\n>\n> For those following along from home, the additional \"true\" is a bool\n> needChkpt argument added to XLogAcceptWrites() by 0003. Well, none of\n> this is very satisfying. 
The whole purpose of passing the xlogreader\n> is so we can figure out whether we need a checkpoint (never mind the\n> question of whether the existing algorithm for determining that is\n> really sensible) but now we need a second argument that basically\n> serves the same purpose since one of the two callers to this function\n> won't have an xlogreader. And then we're passing the EndOfLog and\n> EndOfLogTLI as dummy values which seems like it's probably just\n> totally wrong, but if for some reason it works correctly there sure\n> don't seem to be any comments explaining why.\n>\n> So I started doing a bit of hacking myself and ended up with the\n> attached, which I think is not completely the right thing yet but I\n> think it's better than your version. I split this into three parts.\n> 0001 splits up the logic that currently decides whether to write an\n> end-of-recovery record or a checkpoint record and if the latter how\n> the checkpoint ought to be performed into two functions.\n> DetermineRecoveryXlogAction() figures out what we want to do, and\n> PerformRecoveryXlogAction() does it. It also moves the code to run\n> recovery_end_command and related stuff into a new function\n> CleanupAfterArchiveRecovery(). 0002 then builds on this by postponing\n> UpdateFullPageWrites(), PerformRecoveryXLogAction(), and\n> CleanupAfterArchiveRecovery() to just before we\n> XLogReportParameters(). Because of the refactoring done by 0001, this\n> is only a small amount of code movement. Because of the separation\n> between DetermineRecoveryXlogAction() and PerformRecoveryXlogAction(),\n> the latter doesn't need the xlogreader. So we can do\n> DetermineRecoveryXlogAction() at the same time as now, while the\n> xlogreader is available, and then we don't need it later when we\n> PerformRecoveryXlogAction(), because we already know what we need to\n> know. I think this is all fine as far as it goes.\n>\n> My 0003 is where I see some lingering problems. 
It creates\n> XLogAcceptWrites(), moves the appropriate stuff there, and doesn't\n> need the xlogreader. But it doesn't really solve the problem of how\n> checkpointer.c would be able to call this function with proper\n> arguments. It is at least better in not needing two arguments to\n> decide what to do, but how is checkpointer.c supposed to know what to\n> pass for xlogaction? Worse yet, how is checkpointer.c supposed to know\n> what to pass for EndOfLogTLI and EndOfLog? Actually, EndOfLog doesn't\n> seem too problematic, because that value has been stored in four (!)\n> places inside XLogCtl by this code:\n>\n> LogwrtResult.Write = LogwrtResult.Flush = EndOfLog;\n>\n> XLogCtl->LogwrtResult = LogwrtResult;\n>\n> XLogCtl->LogwrtRqst.Write = EndOfLog;\n> XLogCtl->LogwrtRqst.Flush = EndOfLog;\n>\n> Presumably we could relatively easily change things around so that we\n> fish one of those values ... probably one of the \"write\" values ..\n> back out of XLogCtl instead of passing it as a parameter. That would\n> work just as well from the checkpointer as from the startup process,\n> and there seems to be no way for the value to change until after\n> XLogAcceptWrites() has been called, so it seems fine. But that doesn't\n> help for the other arguments. What I'm thinking is that we should just\n> arrange to store EndOfLogTLI and xlogaction into XLogCtl also, and\n> then XLogAcceptWrites() can fish those values out of there as well,\n> which should be enough to make it work and do the same thing\n> regardless of which process is calling it. But I have run out of time\n> for today so have not explored coding that up.\n>\n\nI have spent some time thinking about making XLogAcceptWrites()\nindependent, and for that, we need to get rid of its arguments which\nare available only in the startup process. The first argument,\nxlogaction, is deduced by DetermineRecoveryXlogAction(). 
If we are able to\nmake this function logic independent and can deduce that xlogaction in\nany process, we can skip xlogaction argument passing.\n\nThe DetermineRecoveryXlogAction() function depends on a few global\nvariables valid only in the startup process: InRecovery,\nArchiveRecoveryRequested & LocalPromoteIsTriggered. Out of the\nthree, LocalPromoteIsTriggered's value is already available in shared\nmemory and that can be fetched by calling LocalPromoteIsTriggered().\nInRecovery's value can be guessed as long as DBState in the control\nfile doesn't get changed unexpectedly until XLogAcceptWrites()\nexecutes. If the DBState was not a clean shutdown, then surely the\nserver has gone through recovery. If we could rely on DBState in the\ncontrol file then we are good to go. For the last one,\nArchiveRecoveryRequested, I don't see any existing and appropriate\nshared memory or control file information from which it can be determined\nwhether archive recovery was requested or not. Initially, I thought to\nuse SharedRecoveryState which is always set to RECOVERY_STATE_ARCHIVE\nif archive recovery is requested. But there is another case where\nSharedRecoveryState could be RECOVERY_STATE_ARCHIVE irrespective of\nArchiveRecoveryRequested value, that is the presence of a backup label\nfile. If we want to use SharedRecoveryState, we need one more state\nwhich could differentiate between ArchiveRecoveryRequested and the\nbackup label file presence case. To move ahead, I have copied\nArchiveRecoveryRequested into shared memory and it will be cleared\nonce archive cleanup is finished. With all these changes, we could get\nrid of the xlogaction argument and DetermineRecoveryXlogAction() function.\nWe could move its logic to PerformRecoveryXLogAction() directly.\n\nNow, the remaining two arguments of XLogAcceptWrites() are required\nfor the CleanupAfterArchiveRecovery() function. 
Along with these two\narguments, this function requires ArchiveRecoveryRequested and\nThisTimeLineID, which are again global variables. With the previous\nchanges, we have got ArchiveRecoveryRequested into shared memory.\nAnd for ThisTimeLineID, I don't think we need to do anything, since this\nvalue is available to all backends as per the following comment:\n\n\"\n/*\n * ThisTimeLineID will be same in all backends --- it identifies current\n * WAL timeline for the database system.\n */\nTimeLineID ThisTimeLineID = 0;\n\"\n\nIn addition to the four places that Robert has pointed out for EndOfLog,\nXLogCtl->lastSegSwitchLSN also holds the EndOfLog value, and that doesn't\nseem to change until WAL writing is enabled. For EndOfLogTLI, I think we\ncan safely use XLogCtl->replayEndTLI. Currently, the EndOfLogTLI is\nthe timeline ID of the last record that xlogreader reads, but this\nxlogreader was simply re-fetching the last record which we have\nreplayed in the redo loop if it was in recovery; if not in recovery, we\ndon't need to worry, since this value is needed only in the case of\nArchiveRecoveryRequested = true, which implicitly forces redo and sets\nthe replayEndTLI value.\n\nWith all the above changes, XLogAcceptWrites() can be called from other\nprocesses, but I haven't tested that. This finding is still not\ncomplete and not too clean; nevertheless, I am posting the patches with\nthe aforesaid changes just to confirm the direction and forward the\ndiscussion, thanks.\n\nRegards,\nAmul",
"msg_date": "Wed, 15 Sep 2021 16:19:17 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, Sep 15, 2021 at 6:49 AM Amul Sul <sulamul@gmail.com> wrote:\n> Initially, I thought to\n> use SharedRecoveryState which is always set to RECOVERY_STATE_ARCHIVE,\n> if the archive recovery requested. But there is another case where\n> SharedRecoveryState could be RECOVERY_STATE_ARCHIVE irrespective of\n> ArchiveRecoveryRequested value, that is the presence of a backup label\n> file.\n\nRight, there's a difference between whether archive recovery has been\n*requested* and whether it is actually *happening*.\n\n> If we want to use SharedRecoveryState, we need one more state\n> which could differentiate between ArchiveRecoveryRequested and the\n> backup label file presence case. To move ahead, I have copied\n> ArchiveRecoveryRequested into shared memory and it will be cleared\n> once archive cleanup is finished. With all these changes, we could get\n> rid of xlogaction argument and DetermineRecoveryXlogAction() function.\n> Could move its logic to PerformRecoveryXLogAction() directly.\n\nPutting these changes into 0001 seems to make no sense. It seems like\nthey should be part of 0003, or maybe a new 0004 patch.\n\n> And for ThisTimeLineID, I don't think we need to do anything since this\n> value is available with all the backend as per the following comment:\n> \"\n> /*\n> * ThisTimeLineID will be same in all backends --- it identifies current\n> * WAL timeline for the database system.\n> */\n> TimeLineID ThisTimeLineID = 0;\n\nI'm not sure I find that argument totally convincing. The two\nvariables aren't assigned at exactly the same places in the code,\nnonwithstanding the comment. I'm not saying you're wrong. I'm just\nsaying I don't believe it just because the comment says so.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 Sep 2021 10:32:10 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, Sep 15, 2021 at 10:32 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Putting these changes into 0001 seems to make no sense. It seems like\n> they should be part of 0003, or maybe a new 0004 patch.\n\nAfter looking at this a little bit more, I think it's really necessary\nto separate out all of your changes into separate patches at least for\ninitial review. It's particularly important to separate code movement\nchanges from other kinds of changes. 0001 was just moving code before,\nand so was 0002, but now both are making other changes, which is not\neasy to see from looking at the 'git diff' output. For that reason\nit's not so easy to understand exactly what you've changed here and\nanalyze it.\n\nI poked around a little bit at these patches, looking for\nperhaps-interesting global variables upon which the code called from\nXLogAcceptWrites() would depend with your patches applied. The most\ninteresting ones seem to be (1) ThisTimeLineID, which you mentioned\nand which may be fine but I'm not totally convinced yet, (2)\nLocalXLogInsertAllowed, which is probably not broken but I'm thinking\nwe may want to redesign that mechanism somehow to make it cleaner, and\n(3) CheckpointStats, which is called from RemoveXlogFile which is\ncalled from RemoveNonParentXlogFiles which is called from\nCleanupAfterArchiveRecovery which is called from XLogAcceptWrites.\nThis last one is actually pretty weird already in the existing code.\nIt sort of looks like RemoveXlogFile() only expects to be called from\nthe checkpointer (or a standalone backend) so that it can update\nCheckpointStats and have that just work, but actually it's also called\nfrom the startup process when a timeline switch happens. 
I don't know\nwhether the fact that the increments to ckpt_segs_recycled get lost in\nthat case should be considered an intentional behavior that should be\npreserved or an inadvertent mistake.\n\nSo I think you've covered most of the necessary things here, with\nprobably some more discussion needed on whether you've done the right\nthings...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 Sep 2021 12:08:00 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, Sep 15, 2021 at 9:38 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Sep 15, 2021 at 10:32 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Putting these changes into 0001 seems to make no sense. It seems like\n> > they should be part of 0003, or maybe a new 0004 patch.\n>\n> After looking at this a little bit more, I think it's really necessary\n> to separate out all of your changes into separate patches at least for\n> initial review. It's particularly important to separate code movement\n> changes from other kinds of changes. 0001 was just moving code before,\n> and so was 0002, but now both are making other changes, which is not\n> easy to see from looking at the 'git diff' output. For that reason\n> it's not so easy to understand exactly what you've changed here and\n> analyze it.\n>\n\nOk, understood, I have separated my changes into 0001 and 0002 patch,\nand the refactoring patches start from 0003.\n\nIn the 0001 patch, I have copied ArchiveRecoveryRequested to shared\nmemory as said previously. Coping ArchiveRecoveryRequested value to\nshared memory is not really interesting, and I think somehow we should\nreuse existing variable, (perhaps, with some modification of the\ninformation it can store, e.g. adding one more enum value for\nSharedRecoveryState or something else), thinking on the same.\n\nIn addition to that, I tried to turn down the scope of\nArchiveRecoveryRequested global variable. Now, this is a static\nvariable, and the scope is limited to xlog.c file like\nLocalXLogInsertAllowed and can be accessed through the newly added\nfunction ArchiveRecoveryIsRequested() (like PromoteIsTriggered()). Let\nme know what you think about the approach.\n\nIn 0002 patch is a mixed one where I tried to remove the dependencies\non global variables and local variables belonging to StartupXLOG(). I\nam still worried about the InRecovery value that needs to be deduced\nafterward inside XLogAcceptWrites(). 
Currently, I am relying on the\nControlFile->state != DB_SHUTDOWNED check, but I think that will not be\ngood for ASRO, where we plan to skip only the XLogAcceptWrites() work and\nlet StartupXLOG() do the rest of the work as it is, where it is\ngoing to update ControlFile's DBState to DB_IN_PRODUCTION; then we\nmight need some ugly kludge to call PerformRecoveryXLogAction() in the\ncheckpointer irrespective of DBState, which makes me a bit\nuncomfortable.\n\n> I poked around a little bit at these patches, looking for\n> perhaps-interesting global variables upon which the code called from\n> XLogAcceptWrites() would depend with your patches applied. The most\n> interesting ones seem to be (1) ThisTimeLineID, which you mentioned\n> and which may be fine but I'm not totally convinced yet, (2)\n> LocalXLogInsertAllowed, which is probably not broken but I'm thinking\n> we may want to redesign that mechanism somehow to make it cleaner, and\n\nThanks for the detailed off-list explanation on this.\n\nFor somebody else who might be reading this, the concern here (not\nreally a concern, it is a good thing to improve) is that the\nLocalSetXLogInsertAllowed() function call is a kind of hack that\nenables wal writes irrespective of RecoveryInProgress() for a short\nperiod. E.g. see the following code in StartupXLOG:\n\n\"\n LocalSetXLogInsertAllowed();\n UpdateFullPageWrites();\n LocalXLogInsertAllowed = -1;\n....\n....\n /*\n * If any of the critical GUCs have changed, log them before we allow\n * backends to write WAL.\n */\n LocalSetXLogInsertAllowed();\n XLogReportParameters();\n\"\n\nInstead of explicitly enabling wal inserts, it would be better if they\nwere implicitly allowed for the startup process and/or the checkpointer\ndoing the first checkpoint and/or wal writes after the recovery. 
Well, the\ncurrent LocalSetXLogInsertAllowed() mechanism is not really harming\nanything and does not necessarily need to change, but it would\nbe nice if we were able to come up with a design that is much cleaner\nand bug-free.\n\n(Hope I am not missing anything from the discussion.)\n\n> (3) CheckpointStats, which is called from RemoveXlogFile which is\n> called from RemoveNonParentXlogFiles which is called from\n> CleanupAfterArchiveRecovery which is called from XLogAcceptWrites.\n> This last one is actually pretty weird already in the existing code.\n> It sort of looks like RemoveXlogFile() only expects to be called from\n> the checkpointer (or a standalone backend) so that it can update\n> CheckpointStats and have that just work, but actually it's also called\n> from the startup process when a timeline switch happens. I don't know\n> whether the fact that the increments to ckpt_segs_recycled get lost in\n> that case should be considered an intentional behavior that should be\n> preserved or an inadvertent mistake.\n>\n\nMaybe I could be wrong, but I think that is intentional. It removes\npre-allocated or bogus files of the old timeline which are not\nsupposed to be considered in stats. The comments for\nCheckpointStatsData might not be clear, but the comment at the\nRemoveNonParentXlogFiles() call site inside StartupXLOG hints at the same:\n\n\"\n/*\n * Before we continue on the new timeline, clean up any\n * (possibly bogus) future WAL segments on the old\n * timeline.\n */\nRemoveNonParentXlogFiles(EndRecPtr, ThisTimeLineID);\n....\n....\n\n * We switched to a new timeline. Clean up segments on the old\n * timeline.\n *\n * If there are any higher-numbered segments on the old timeline,\n * remove them. They might contain valid WAL, but they might also be\n * pre-allocated files containing garbage. 
In any case, they are not\n * part of the new timeline's history so we don't need them.\n */\nRemoveNonParentXlogFiles(EndOfLog, ThisTimeLineID);\n\"\n\n> So I think you've covered most of the necessary things here, with\n> probably some more discussion needed on whether you've done the right\n> things...\n>\n\nThanks, Robert, for your time.\n\nRegards,\nAmul Sul",
"msg_date": "Mon, 20 Sep 2021 20:49:50 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "Hi Mark,\n\nI have tried to fix your review comment in the attached version,\nplease see my inline reply below.\n\nOn Fri, Sep 10, 2021 at 8:06 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Thu, Sep 9, 2021 at 11:12 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n> >\n> >\n>\n> Thank you, for looking at the patch. Please see my reply inline below:\n>\n> >\n> > > On Sep 8, 2021, at 6:44 AM, Amul Sul <sulamul@gmail.com> wrote:\n> > >\n> > > Here is the rebased version.\n> >\n> > v33-0004\n> >\n> > This patch moves the include of \"catalog/pg_control.h\" from transam/xlog.c into access/xlog.h, making pg_control.h indirectly included from a much larger set of files. Maybe that's ok. I don't know. But it seems you are doing this merely to get the symbol (not even the definition) for struct DBState. I'd recommend rearranging the code so this isn't necessary, but otherwise you'd at least want to remove the now redundant includes of catalog/pg_control.h from xlogdesc.c, xloginsert.c, auth-scram.c, postmaster.c, misc/pg_controldata.c, and pg_controldata/pg_controldata.c.\n> >\n>\n> Yes, you are correct, xlog.h is included in more than 150 files. I was\n> wondering if we can have a forward declaration instead of including\n> pg_control.h (e.g. The same way struct XLogRecData was declared in\n> xlog.h). Perhaps, DBState is enum & I don't see we have done the same\n> for enum elsewhere as we are doing for structures, but that seems to\n> be fine, IMO.\n>\n> Earlier, I was unsure before preparing this patch, but since that\n> makes sense (I assumed) and minimizes duplications, can we go ahead\n> and post separately with the same change in StartupXLOG() which I have\n> skipped for the same reason mentioned in patch commit-msg.\n>\n\nFYI, I have posted this patch separately [1] & drop it from the current set.\n\n> > v33-0005\n> > The code comment change in autovacuum.c introduces a non-grammatical sentence: \"First, the system is not read only i.e. 
wal writes permitted\".\n> \n\nFixed.\n\n> > The function comment in checkpointer.c reads more like it toggles the system into allowing something, rather than actually doing that same something: \"SendSignalToCheckpointer allows a process to send a signal to the checkpoint process\".\n> >\n\nI am not sure I understood the concern; what comment do you think it\nshould be? This function helps to signal the checkpointer, but it doesn't\nsay what the checkpointer is supposed to do.\n\n> > The new code comment in ipci.c contains a typo, but more importantly, it doesn't impart any knowledge beyond what a reader of the function name could already surmise. Perhaps the comment can better clarify what is happening: \"Set up wal probibit shared state\"\n> >\n\nDone.\n\n> > The new code comment in sync.c copies and changes a nearby comment but drops part of the verb phrase: \"As in ProcessSyncRequests, we don't want to stop wal prohibit change requests\". The nearby comment reads \"stop absorbing\". I think this one should read \"stop processing\". This same comment is used again below. Then a third comment reads \"For the same reason mentioned previously for the wal prohibit state change request check.\" That third comment is too glib.\n> >\n\nOk, \"stop processing\" is used. I think the third comment should be\nfine instead of copying the same again; however, I changed that comment\na bit for more clarity, as \"For the same reason mentioned previously\nfor the same function call\".\n\n> > tcop/utility.c needlessly includes \"access/walprohibit.h\"\n> >\n> > wait_event.h extends enum WaitEventIO with new values WAIT_EVENT_WALPROHIBIT_STATE and WAIT_EVENT_WALPROHIBIT_STATE_CHANGE. I don't find the difference between these two names at all clear. Waiting for a state change is clear enough. 
But how is waiting on a state different?\n> >\n\nWAIT_EVENT_WALPROHIBIT_STATE_CHANGE gets set in pg_prohibit_wal()\nwhile waiting for the wal prohibit state change to complete.\nWAIT_EVENT_WALPROHIBIT_STATE is set for the checkpointer process when\nit sees the system is in a WAL PROHIBITED state & stops there. But I\nthink it makes sense to have only one, i.e.\nWAIT_EVENT_WALPROHIBIT_STATE_CHANGE. The same can be used for the\ncheckpointer since it won't do anything until the wal prohibited state\nchanges.\n\nRemoved WAIT_EVENT_WALPROHIBIT_STATE in the attached version.\n\n> > xlog.h defines a new enum. I don't find any of it clear; not the comment, nor the name of the enum, nor the names of the values:\n> >\n> > /* State of work that enables wal writes */\n> > typedef enum XLogAcceptWritesState\n> > {\n> > XLOG_ACCEPT_WRITES_PENDING = 0, /* initial state, not started */\n> > XLOG_ACCEPT_WRITES_SKIPPED, /* skipped wal writes */\n> > XLOG_ACCEPT_WRITES_DONE /* wal writes are enabled */\n> > } XLogAcceptWritesState;\n> >\n> > This enum seems to have been written from the point of view of someone who already knew what it was for. It needs to be written in a way that will be clear to people who have no idea what it is for.\n> >\n\nI had tried to avoid the function name in the comment, since the enum name\npretty much resembles the XLogAcceptWrites() function name whose\nexecution state we are trying to track, but I have added it now; that\nshould be much clearer.\n\n> > v33-0006:\n> >\n> > The new code comments in brin.c and elsewhere should use the verb \"require\" rather than \"have\", otherwise \"building indexes\" reads as a noun phrase rather than as a gerund: /* Building indexes will have an XID */\n> >\n\nRephrased the comments, but I think HAVE XID is much more appropriate\nthere because that assert function's name ends with HaveXID.\n\nApart from this, I have moved CheckWALPermitted() closer to\nSTART_CRIT_SECTION, which you pointed out in your other post, and\nmade a few other changes. 
Note that the patch numbers have changed; I have\nrebased my implementation on top of the under-discussion refactoring\npatches which I have posted previously [2] and reattached the same\nhere to make CFbot continue its testing.\n\nNote that with the current version of the patch on the latest master head\nI am getting one issue, though it can be seen only sometimes, where the\nsame INSERT query gets stuck, waiting for WALWriteLock in exclusive mode. I\nam not sure if it is due to my changes, but it is not occurring without\nmy patch. I am looking into that; just in case anybody wants to\nknow more, I have attached the backtrace, pg_lock & ps output, see the\nattached file ps-bt-pg_lock.out.text.\n\nRegards,\nAmul\n\n1] https://postgr.es/m/CAAJ_b97nd_ghRpyFV9Djf9RLXkoTbOUqnocq11WGq9TisX09Fw@mail.gmail.com\n2] https://postgr.es/m/CAAJ_b96G-oBxDC3C7Y72ER09bsheGHOxBK1HXHVOyHNXjTDmcA@mail.gmail.com",
"msg_date": "Wed, 22 Sep 2021 18:18:30 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, Sep 15, 2021 at 4:34 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n>\n>\n> > On Jun 16, 2020, at 6:55 AM, amul sul <sulamul@gmail.com> wrote:\n> >\n> > (2) if the session is idle, we also need the top-level abort\n> > record to be written immediately, but can't send an error to the client until the next\n> > command is issued without losing wire protocol synchronization. For now, we just use\n> > FATAL to kill the session; maybe this can be improved in the future.\n>\n> Andres,\n>\n> I'd like to have a patch that tests the impact of a vacuum running for xid wraparound purposes, blocked on a pinned page held by the cursor, when another session disables WAL. It would be very interesting to test how the vacuum handles that specific change. I have not figured out the cleanest way to do this, though, as we don't as a project yet have a standard way of setting up xid exhaustion in a regression test, do we? The closest I saw to it was your work in [1], but that doesn't seem to have made much headway recently, and is designed for the TAP testing infrastructure, which isn't useable from inside an isolation test. Do you have a suggestion how best to continue developing out the test infrastructure?\n>\n>\n> Amul,\n>\n> The most obvious way to test how your ALTER SYSTEM READ ONLY feature interacts with concurrent sessions is using the isolation tester in src/test/isolation/, but as it stands now, the first permutation that gets a FATAL causes the test to abort and all subsequent permutations to not run. Attached patch v34-0009 fixes that.\n>\n\nInteresting.\n\n> Attached patch v34-0010 adds a test of cursors opened FOR UPDATE interacting with a system that is set read-only by a different session. The expected output is worth reviewing to see how this plays out. I don't see anything in there which is obviously wrong, but some of it is a bit clunky. 
For example, by the time the client sees an error \"FATAL: WAL is now prohibited\", the system may already have switched back to read-write. Also, it is a bit strange to get one of these errors on an attempted ROLLBACK. Once again, not wrong as such, but clunky.\n>\n\nCan't we do the same in the TAP test? If the intention is only to test\nsession termination when the system changes to WAL are prohibited then\nthat I have added in the latest version, but that test does not\nreinitiate the same connection again, I think that is not possible\nthere too.\n\n\nRegards,\nAmul\n\n\n",
"msg_date": "Wed, 22 Sep 2021 18:44:57 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "\n\n> On Sep 22, 2021, at 6:14 AM, Amul Sul <sulamul@gmail.com> wrote:\n> \n>> Attached patch v34-0010 adds a test of cursors opened FOR UPDATE interacting with a system that is set read-only by a different session. The expected output is worth reviewing to see how this plays out. I don't see anything in there which is obviously wrong, but some of it is a bit clunky. For example, by the time the client sees an error \"FATAL: WAL is now prohibited\", the system may already have switched back to read-write. Also, it is a bit strange to get one of these errors on an attempted ROLLBACK. Once again, not wrong as such, but clunky.\n>> \n> \n> Can't we do the same in the TAP test? If the intention is only to test\n> session termination when the system changes to WAL are prohibited then\n> that I have added in the latest version, but that test does not\n> reinitiate the same connection again, I think that is not possible\n> there too.\n\nPerhaps you can point me to a TAP test that does this in a concise fashion. When I tried writing a TAP test for this, it was much longer than the equivalent isolation test spec.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 22 Sep 2021 06:29:09 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, Sep 22, 2021 at 6:59 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n>\n>\n> > On Sep 22, 2021, at 6:14 AM, Amul Sul <sulamul@gmail.com> wrote:\n> >\n> >> Attached patch v34-0010 adds a test of cursors opened FOR UPDATE interacting with a system that is set read-only by a different session. The expected output is worth reviewing to see how this plays out. I don't see anything in there which is obviously wrong, but some of it is a bit clunky. For example, by the time the client sees an error \"FATAL: WAL is now prohibited\", the system may already have switched back to read-write. Also, it is a bit strange to get one of these errors on an attempted ROLLBACK. Once again, not wrong as such, but clunky.\n> >>\n> >\n> > Can't we do the same in the TAP test? If the intention is only to test\n> > session termination when the system changes to WAL are prohibited then\n> > that I have added in the latest version, but that test does not\n> > reinitiate the same connection again, I think that is not possible\n> > there too.\n>\n> Perhaps you can point me to a TAP test that does this in a concise fashion. 
When I tried writing a TAP test for this, it was much longer than the equivalent isolation test spec.\n>\n\nYes, that is a bit longer, here is the snip from v35-0010 patch:\n\n+my $psql_timeout = IPC::Run::timer(60);\n+my ($mysession_stdin, $mysession_stdout, $mysession_stderr) = ('', '', '');\n+my $mysession = IPC::Run::start(\n+ [\n+ 'psql', '-X', '-qAt', '-v', 'ON_ERROR_STOP=1', '-f', '-', '-d',\n+ $node_primary->connstr('postgres')\n+ ],\n+ '<',\n+ \\$mysession_stdin,\n+ '>',\n+ \\$mysession_stdout,\n+ '2>',\n+ \\$mysession_stderr,\n+ $psql_timeout);\n+\n+# Write in transaction and get backend pid\n+$mysession_stdin .= q[\n+BEGIN;\n+INSERT INTO tab VALUES(7);\n+SELECT $$value-7-inserted-into-tab$$;\n+];\n+$mysession->pump until $mysession_stdout =~ /value-7-inserted-into-tab[\\r\\n]$/;\n+like($mysession_stdout, qr/value-7-inserted-into-tab/,\n+ 'started write transaction in a session');\n+$mysession_stdout = '';\n+$mysession_stderr = '';\n+\n+# Change to WAL prohibited\n+$node_primary->safe_psql('postgres', 'SELECT pg_prohibit_wal(true)');\n+is($node_primary->safe_psql('postgres', $show_wal_prohibited_query), 'on',\n+ 'server is changed to wal prohibited by another session');\n+\n+# Try to commit open write transaction.\n+$mysession_stdin .= q[\n+COMMIT;\n+];\n+$mysession->pump;\n+like($mysession_stderr, qr/FATAL: WAL is now prohibited/,\n+ 'session with open write transaction is terminated');\n\nRegards,\nAmul\n\n\n",
"msg_date": "Wed, 22 Sep 2021 19:09:04 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "\n\n> On Sep 22, 2021, at 6:39 AM, Amul Sul <sulamul@gmail.com> wrote:\n> \n> Yes, that is a bit longer, here is the snip from v35-0010 patch\n\nRight, that's longer, and only tests one interaction. The isolation spec I posted upthread tests multiple interactions between the session which uses cursors and the system going read-only. Whether the session using a cursor gets a FATAL, just an ERROR, or neither depends on where it is in the process of opening, using, closing and committing. I think that's interesting. If the implementation of the ALTER SESSION READ ONLY feature were to change in such a way as, for example, to make the attempt to open the cursor be a FATAL error, you'd see a change in the test output.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 22 Sep 2021 07:03:15 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, Sep 22, 2021 at 7:33 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n>\n>\n> > On Sep 22, 2021, at 6:39 AM, Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > Yes, that is a bit longer, here is the snip from v35-0010 patch\n>\n> Right, that's longer, and only tests one interaction. The isolation spec I posted upthread tests multiple interactions between the session which uses cursors and the system going read-only. Whether the session using a cursor gets a FATAL, just an ERROR, or neither depends on where it is in the process of opening, using, closing and committing. I think that's interesting. If the implementation of the ALTER SESSION READ ONLY feature were to change in such a way as, for example, to make the attempt to open the cursor be a FATAL error, you'd see a change in the test output.\n>\n\nAgreed.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Thu, 23 Sep 2021 09:21:21 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Mon, Sep 20, 2021 at 11:20 AM Amul Sul <sulamul@gmail.com> wrote:\n> Ok, understood, I have separated my changes into 0001 and 0002 patch,\n> and the refactoring patches start from 0003.\n\nI think it would be better in the other order, with the refactoring\npatches at the beginning of the series.\n\n> In the 0001 patch, I have copied ArchiveRecoveryRequested to shared\n> memory as said previously. Coping ArchiveRecoveryRequested value to\n> shared memory is not really interesting, and I think somehow we should\n> reuse existing variable, (perhaps, with some modification of the\n> information it can store, e.g. adding one more enum value for\n> SharedRecoveryState or something else), thinking on the same.\n>\n> In addition to that, I tried to turn down the scope of\n> ArchiveRecoveryRequested global variable. Now, this is a static\n> variable, and the scope is limited to xlog.c file like\n> LocalXLogInsertAllowed and can be accessed through the newly added\n> function ArchiveRecoveryIsRequested() (like PromoteIsTriggered()). Let\n> me know what you think about the approach.\n\nI'm not sure yet whether I like this or not, but it doesn't seem like\na terrible idea. You spelled UNKNOWN wrong, though, which does seem\nlike a terrible idea. :-) \"acccsed\" is not correct either.\n\nAlso, the new comments for ArchiveRecoveryRequested /\nARCHIVE_RECOVERY_REQUEST_* are really not very clear. All you did is\nsubstitute the new terminology into the existing comment, but that\nmeans that the purpose of the new \"unknown\" value is not at all clear.\n\nConsider the following two patch fragments:\n\n+ * SharedArchiveRecoveryRequested indicates whether an archive recovery is\n+ * requested. Protected by info_lck.\n...\n+ * Remember archive recovery request in shared memory state. 
A lock is not\n+ * needed since we are the only ones who updating this.\n\nThese two comments directly contradict each other.\n\n+ SpinLockAcquire(&XLogCtl->info_lck);\n+ XLogCtl->SharedArchiveRecoveryRequested = false;\n+ ArchiveRecoveryRequested = ARCHIVE_RECOVERY_REQUEST_UNKOWN;\n+ SpinLockRelease(&XLogCtl->info_lck);\n\nThis seems odd to me. In the first place, there doesn't seem to be any\nvalue in clearing this -- we're just expending extra CPU cycles to get\nrid of a value that wouldn't be used anyway. In the second place, if\nsomehow someone checked the value after this point, with this code,\nthey might get the wrong answer, whereas if you just deleted this,\nthey would get the right answer.\n\n> In 0002 patch is a mixed one where I tried to remove the dependencies\n> on global variables and local variables belonging to StartupXLOG(). I\n> am still worried about the InRecovery value that needs to be deduced\n> afterward inside XLogAcceptWrites(). Currently, relying on\n> ControlFile->state != DB_SHUTDOWNED check but I think that will not be\n> good for ASRO where we plan to skip XLogAcceptWrites() work only and\n> let the StartupXLOG() do the rest of the work as it is where it will\n> going to update ControlFile's DBState to DB_IN_PRODUCTION, then we\n> might need some ugly kludge to call PerformRecoveryXLogAction() in\n> checkpointer irrespective of DBState, which makes me a bit\n> uncomfortable.\n\nI think that replacing the if (InRecovery) test with if\n(ControlFile->state != DB_SHUTDOWNED) is probably just plain wrong. I\nmean, there are three separate places where we set InRecovery = true.\nOne of those executes if ControlFile->state != DB_SHUTDOWNED, matching\nwhat you have here, but it also can happen if checkPoint.redo <\nRecPtr, or if read_backup_label is true and ReadCheckpointRecord\nreturns non-NULL. 
Now maybe you're going to tell me that in those\nother two cases we can't reach here anyway, but I don't see off-hand\nwhy that should be true, and even if it is true, it seems like kind of\na fragile thing to rely on. I think we need to rely on something in\nshared memory that is more explicitly connected to the question of\nwhether we are in recovery.\n\nThe other part of this patch has to do with whether we can use the\nreturn value of GetLastSegSwitchData as a substitute for relying on\nEndOfLog. Now as you have it, you end up creating a local variable\ncalled EndOfLog that shadows another such variable in an outer scope,\nwhich probably would not make anyone who noticed things in such a\nstate very happy. However, that will naturally get fixed if you\nreorder the patches as per above, so let's turn to the central\nquestion: is this a good way of getting EndOfLog? The value that would\nbe in effect at the time this code is executed is set here:\n\n XLogBeginRead(xlogreader, LastRec);\n record = ReadRecord(xlogreader, PANIC, false);\n EndOfLog = EndRecPtr;\n\nSubsequently we do this:\n\n /* start the archive_timeout timer and LSN running */\n XLogCtl->lastSegSwitchTime = (pg_time_t) time(NULL);\n XLogCtl->lastSegSwitchLSN = EndOfLog;\n\nSo at that point the value that GetLastSegSwitchData() would return\nhas to match what's in the existing variable. But later XLogWrite()\nwill change the value. So the question boils down to whether\nXLogWrite() could have been called between the assignment just above\nand when this code runs. And the answer seems to pretty clear be yes,\nbecause just above this code, we might have done\nCreateEndOfRecoveryRecord() or RequestCheckpoint(), and just above\nthat, we did UpdateFullPageWrites(). 
So I don't think this is right.\n\n> > (3) CheckpointStats, which is called from RemoveXlogFile which is\n> > called from RemoveNonParentXlogFiles which is called from\n> > CleanupAfterArchiveRecovery which is called from XLogAcceptWrites.\n> > This last one is actually pretty weird already in the existing code.\n> > It sort of looks like RemoveXlogFile() only expects to be called from\n> > the checkpointer (or a standalone backend) so that it can update\n> > CheckpointStats and have that just work, but actually it's also called\n> > from the startup process when a timeline switch happens. I don't know\n> > whether the fact that the increments to ckpt_segs_recycled get lost in\n> > that case should be considered an intentional behavior that should be\n> > preserved or an inadvertent mistake. >\n>\n> Maybe I could be wrong, but I think that is intentional. It removes\n> pre-allocated or bogus files of the old timeline which are not\n> supposed to be considered in stats. The comments for\n> CheckpointStatsData might not be clear but comment at the calling\n> RemoveNonParentXlogFiles() place inside StartupXLOG hints the same:\n\nSure, I'm not saying the files are being removed by accident. I'm\nsaying it may be accidental that the removals are (I think) not going\nto make it into the stats.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 23 Sep 2021 14:26:17 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Sep 23, 2021 at 11:56 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Sep 20, 2021 at 11:20 AM Amul Sul <sulamul@gmail.com> wrote:\n> > Ok, understood, I have separated my changes into 0001 and 0002 patch,\n> > and the refactoring patches start from 0003.\n>\n> I think it would be better in the other order, with the refactoring\n> patches at the beginning of the series.\n>\n\nOk, will do that. I did this other way to minimize the diff e.g.\ndeletion diff of RecoveryXlogAction enum and\nDetermineRecoveryXlogAction(), etc.\n\n> > In the 0001 patch, I have copied ArchiveRecoveryRequested to shared\n> > memory as said previously. Coping ArchiveRecoveryRequested value to\n> > shared memory is not really interesting, and I think somehow we should\n> > reuse existing variable, (perhaps, with some modification of the\n> > information it can store, e.g. adding one more enum value for\n> > SharedRecoveryState or something else), thinking on the same.\n> >\n> > In addition to that, I tried to turn down the scope of\n> > ArchiveRecoveryRequested global variable. Now, this is a static\n> > variable, and the scope is limited to xlog.c file like\n> > LocalXLogInsertAllowed and can be accessed through the newly added\n> > function ArchiveRecoveryIsRequested() (like PromoteIsTriggered()). Let\n> > me know what you think about the approach.\n>\n> I'm not sure yet whether I like this or not, but it doesn't seem like\n> a terrible idea. You spelled UNKNOWN wrong, though, which does seem\n> like a terrible idea. :-) \"acccsed\" is not correct either.\n>\n> Also, the new comments for ArchiveRecoveryRequested /\n> ARCHIVE_RECOVERY_REQUEST_* are really not very clear. 
All you did is\n> substitute the new terminology into the existing comment, but that\n> means that the purpose of the new \"unknown\" value is not at all clear.\n>\n\nOk, will fix those typos and try to improve the comments.\n\n> Consider the following two patch fragments:\n>\n> + * SharedArchiveRecoveryRequested indicates whether an archive recovery is\n> + * requested. Protected by info_lck.\n> ...\n> + * Remember archive recovery request in shared memory state. A lock is not\n> + * needed since we are the only ones who updating this.\n>\n> These two comments directly contradict each other.\n>\n\nOkay, the first comment is not clear enough, I will fix that too. I\nmeant we don't need the lock now since we are the only one updating at\nthis point.\n\n> + SpinLockAcquire(&XLogCtl->info_lck);\n> + XLogCtl->SharedArchiveRecoveryRequested = false;\n> + ArchiveRecoveryRequested = ARCHIVE_RECOVERY_REQUEST_UNKOWN;\n> + SpinLockRelease(&XLogCtl->info_lck);\n>\n> This seems odd to me. In the first place, there doesn't seem to be any\n> value in clearing this -- we're just expending extra CPU cycles to get\n> rid of a value that wouldn't be used anyway. In the second place, if\n> somehow someone checked the value after this point, with this code,\n> they might get the wrong answer, whereas if you just deleted this,\n> they would get the right answer.\n>\n\nPreviously, this flag was only valid in the startup process. But now\nit will be valid for all the processes and will stay until the whole\nserver gets restarted. I don't want anybody to use this flag after the\ncleanup point and just be sure I am explicitly cleaning this.\n\nBy the way, I also don't expect we should go with this approach. 
I\nproposed this by referring to the PromoteIsTriggered() implementation,\nbut IMO, it is better to have something else, since we just want to\nperform the archive cleanup operation, and most of the work related to\narchive recovery was done inside StartupXLOG().\n\nRather than the proposed design, I was thinking of adding one or two\nmore RecoveryState enums, and, while skipping XLogAcceptWrites(),\nsetting XLogCtl->SharedRecoveryState appropriately, so that we can\neasily identify that archive recovery was requested previously and\nthat now we need to perform its pending cleanup operation. Thoughts?\n\n> > In 0002 patch is a mixed one where I tried to remove the dependencies\n> > on global variables and local variables belonging to StartupXLOG(). I\n> > am still worried about the InRecovery value that needs to be deduced\n> > afterward inside XLogAcceptWrites(). Currently, relying on\n> > ControlFile->state != DB_SHUTDOWNED check but I think that will not be\n> > good for ASRO where we plan to skip XLogAcceptWrites() work only and\n> > let the StartupXLOG() do the rest of the work as it is where it will\n> > going to update ControlFile's DBState to DB_IN_PRODUCTION, then we\n> > might need some ugly kludge to call PerformRecoveryXLogAction() in\n> > checkpointer irrespective of DBState, which makes me a bit\n> > uncomfortable.\n>\n> I think that replacing the if (InRecovery) test with if\n> (ControlFile->state != DB_SHUTDOWNED) is probably just plain wrong. I\n> mean, there are three separate places where we set InRecovery = true.\n> One of those executes if ControlFile->state != DB_SHUTDOWNED, matching\n> what you have here, but it also can happen if checkPoint.redo <\n> RecPtr, or if read_backup_label is true and ReadCheckpointRecord\n> returns non-NULL. 
Now maybe you're going to tell me that in those\n> other two cases we can't reach here anyway, but I don't see off-hand\n> why that should be true, and even if it is true, it seems like kind of\n> a fragile thing to rely on. I think we need to rely on something in\n> shared memory that is more explicitly connected to the question of\n> whether we are in recovery.\n>\n\nNo, it is the other way around. I didn't pick the (ControlFile->state !=\nDB_SHUTDOWNED) condition because it sets InRecovery; rather, I picked it\nbecause the InRecovery flag leads to ControlFile->state being set to\neither DB_IN_ARCHIVE_RECOVERY or DB_IN_CRASH_RECOVERY; see the next if\n(InRecovery) block after the InRecovery flag gets set. It is certain\nthat when the system is InRecovery, it will have a DBState other than\nDB_SHUTDOWNED. But that isn't a clean approach to me, because when the\nsystem is WAL-prohibited the DBState will be DB_IN_PRODUCTION, which\nwill not work, as I mentioned previously.\n\nI am also thinking about passing this information via shared memory,\nbut I am trying to avoid that somehow; let's see.\n\n> The other part of this patch has to do with whether we can use the\n> return value of GetLastSegSwitchData as a substitute for relying on\n> EndOfLog. Now as you have it, you end up creating a local variable\n> called EndOfLog that shadows another such variable in an outer scope,\n> which probably would not make anyone who noticed things in such a\n> state very happy. However, that will naturally get fixed if you\n> reorder the patches as per above, so let's turn to the central\n> question: is this a good way of getting EndOfLog? 
The value that would\n> be in effect at the time this code is executed is set here:\n>\n> XLogBeginRead(xlogreader, LastRec);\n> record = ReadRecord(xlogreader, PANIC, false);\n> EndOfLog = EndRecPtr;\n>\n> Subsequently we do this:\n>\n> /* start the archive_timeout timer and LSN running */\n> XLogCtl->lastSegSwitchTime = (pg_time_t) time(NULL);\n> XLogCtl->lastSegSwitchLSN = EndOfLog;\n>\n> So at that point the value that GetLastSegSwitchData() would return\n> has to match what's in the existing variable. But later XLogWrite()\n> will change the value. So the question boils down to whether\n> XLogWrite() could have been called between the assignment just above\n> and when this code runs. And the answer seems to pretty clear be yes,\n> because just above this code, we might have done\n> CreateEndOfRecoveryRecord() or RequestCheckpoint(), and just above\n> that, we did UpdateFullPageWrites(). So I don't think this is right.\n>\n\nYou are correct that if XLogWrite() is called in between, the\nlastSegSwitchLSN value can be changed; but the question is whether it\nwill change in our case. I think it won't; let me explain.\n\nIIUC, lastSegSwitchLSN generally changes in XLogWrite() when the\nprevious WAL segment has been filled up. But look closely at what will\nbe written before we check lastSegSwitchLSN: currently, we have a\nrecord for the full-page write and a record for either the recovery\nend or a checkpoint. All of these are fixed-size, and I don't think\nthey are going to fill the whole 16MB WAL file. 
Correct me if I am missing something.\n\n> > > (3) CheckpointStats, which is called from RemoveXlogFile which is\n> > > called from RemoveNonParentXlogFiles which is called from\n> > > CleanupAfterArchiveRecovery which is called from XLogAcceptWrites.\n> > > This last one is actually pretty weird already in the existing code.\n> > > It sort of looks like RemoveXlogFile() only expects to be called from\n> > > the checkpointer (or a standalone backend) so that it can update\n> > > CheckpointStats and have that just work, but actually it's also called\n> > > from the startup process when a timeline switch happens. I don't know\n> > > whether the fact that the increments to ckpt_segs_recycled get lost in\n> > > that case should be considered an intentional behavior that should be\n> > > preserved or an inadvertent mistake. >\n> >\n> > Maybe I could be wrong, but I think that is intentional. It removes\n> > pre-allocated or bogus files of the old timeline which are not\n> > supposed to be considered in stats. The comments for\n> > CheckpointStatsData might not be clear but comment at the calling\n> > RemoveNonParentXlogFiles() place inside StartupXLOG hints the same:\n>\n> Sure, I'm not saying the files are being removed by accident. I'm\n> saying it may be accidental that the removals are (I think) not going\n> to make it into the stats.\n>\n\nUnderstood; it looks like I missed the concluding line in the previous\nreply. My point was: if we are deleting bogus files, why should we\ncare about counting them in the stats?\n\nRegards,\nAmul\n\n\n",
"msg_date": "Fri, 24 Sep 2021 17:07:42 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Fri, Sep 24, 2021 at 5:07 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Thu, Sep 23, 2021 at 11:56 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Mon, Sep 20, 2021 at 11:20 AM Amul Sul <sulamul@gmail.com> wrote:\n> > > Ok, understood, I have separated my changes into 0001 and 0002 patch,\n> > > and the refactoring patches start from 0003.\n> >\n> > I think it would be better in the other order, with the refactoring\n> > patches at the beginning of the series.\n> >\n>\n> Ok, will do that. I did this other way to minimize the diff e.g.\n> deletion diff of RecoveryXlogAction enum and\n> DetermineRecoveryXlogAction(), etc.\n>\n\nI have reversed the patch order. Now refactoring patches will be\nfirst, and the patch that removes the dependencies on global & local\nvariables will be the last. I did the necessary modification in the\nrefactoring patches too e.g. removed DetermineRecoveryXlogAction() and\nRecoveryXlogAction enum which is no longer needed (thanks to commit #\n1d919de5eb3fffa7cc9479ed6d2915fb89794459 to make code simple).\n\nTo find the value of InRecovery after we clear it, patch still uses\nControlFile's DBState, but now the check condition changed to a more\nspecific one which is less confusing.\n\nIn casual off-list discussion, the point was made to check\nSharedRecoveryState to find out the InRecovery value afterward, and\ncheck that using RecoveryInProgress(). But we can't depend on\nSharedRecoveryState because at the start it gets initialized to\nRECOVERY_STATE_CRASH irrespective of InRecovery that happens later.\nTherefore, we can't use RecoveryInProgress() which always returns\ntrue if SharedRecoveryState != RECOVERY_STATE_DONE.\n\nI am posting only refactoring patches for now.\n\nRegards,\nAmul",
"msg_date": "Thu, 30 Sep 2021 17:28:24 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Sep 30, 2021 at 7:59 AM Amul Sul <sulamul@gmail.com> wrote:\n> To find the value of InRecovery after we clear it, patch still uses\n> ControlFile's DBState, but now the check condition changed to a more\n> specific one which is less confusing.\n>\n> In casual off-list discussion, the point was made to check\n> SharedRecoveryState to find out the InRecovery value afterward, and\n> check that using RecoveryInProgress(). But we can't depend on\n> SharedRecoveryState because at the start it gets initialized to\n> RECOVERY_STATE_CRASH irrespective of InRecovery that happens later.\n> Therefore, we can't use RecoveryInProgress() which always returns\n> true if SharedRecoveryState != RECOVERY_STATE_DONE.\n\nUh, this change has crept into 0002, but it should be in 0004 with the\nrest of the changes to remove dependencies on variables specific to\nthe startup process. Like I said before, we should really be trying to\nseparate code movement from functional changes. Also, 0002 doesn't\nactually apply for me. Did you generate these patches with 'git\nformat-patch'?\n\n[rhaas pgsql]$ patch -p1 <\n~/Downloads/v36-0001-Refactor-some-end-of-recovery-code-out-of-Startu.patch\npatching file src/backend/access/transam/xlog.c\nHunk #1 succeeded at 889 (offset 9 lines).\nHunk #2 succeeded at 939 (offset 12 lines).\nHunk #3 succeeded at 5734 (offset 37 lines).\nHunk #4 succeeded at 8038 (offset 70 lines).\nHunk #5 succeeded at 8248 (offset 70 lines).\n[rhaas pgsql]$ patch -p1 <\n~/Downloads/v36-0002-Postpone-some-end-of-recovery-operations-relatin.patch\npatching file src/backend/access/transam/xlog.c\nReversed (or previously applied) patch detected! Assume -R? [n]\nApply anyway? 
[n] y\nHunk #1 FAILED at 7954.\nHunk #2 succeeded at 8079 (offset 70 lines).\n1 out of 2 hunks FAILED -- saving rejects to file\nsrc/backend/access/transam/xlog.c.rej\n[rhaas pgsql]$ git reset --hard\nHEAD is now at b484ddf4d2 Treat ETIMEDOUT as indicating a\nnon-recoverable connection failure.\n[rhaas pgsql]$ patch -p1 <\n~/Downloads/v36-0002-Postpone-some-end-of-recovery-operations-relatin.patch\npatching file src/backend/access/transam/xlog.c\nReversed (or previously applied) patch detected! Assume -R? [n]\nApply anyway? [n]\nSkipping patch.\n2 out of 2 hunks ignored -- saving rejects to file\nsrc/backend/access/transam/xlog.c.rej\n\nIt seems to me that the approach you're pursuing here can't work,\nbecause the long-term goal is to get to a place where, if the system\nstarts up read-only, XLogAcceptWrites() might not be called until\nlater, after StartupXLOG() has exited. But in that case the control\nfile state would be DB_IN_PRODUCTION. But my idea of using\nRecoveryInProgress() won't work either, because we set\nRECOVERY_STATE_DONE just after we set DB_IN_PRODUCTION. Put\ndifferently, the question we want to answer is not \"are we in recovery\nnow?\" but \"did we perform recovery?\". After studying the code a bit, I\nthink a good test might be\n!XLogRecPtrIsInvalid(XLogCtl->lastReplayedEndRecPtr). If InRecovery\ngets set to true, then we're certain to enter the if (InRecovery)\nblock that contains the main redo loop. And that block unconditionally\ndoes XLogCtl->lastReplayedEndRecPtr = XLogCtl->replayEndRecPtr. I\nthink that replayEndRecPtr can't be 0 because it's supposed to\nrepresent the record we're pretending to have last replayed, as\nexplained by the comments. And while lastReplayedEndRecPtr will get\nupdated later as we replay more records, I think it will never be set\nback to 0. It's only going to increase, as we replay more records. 
On\nthe other hand if InRecovery = false then we'll never change it, and\nit seems that it starts out as 0.\n\nI was hoping to have more time today to comment on 0004, but the day\nseems to have gotten away from me. One quick thought is that it looks\na bit strange to be getting EndOfLog from GetLastSegSwitchData() which\nreturns lastSegSwitchLSN while getting EndOfLogTLI from replayEndTLI\n... because there's also replayEndRecPtr, which seems to go with\nreplayEndTLI. It feels like we should use a source for the TLI that\nclearly matches the source for the corresponding LSN, unless there's\nsome super-good reason to do otherwise.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 30 Sep 2021 16:59:18 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Fri, Oct 1, 2021 at 2:29 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Sep 30, 2021 at 7:59 AM Amul Sul <sulamul@gmail.com> wrote:\n> > To find the value of InRecovery after we clear it, patch still uses\n> > ControlFile's DBState, but now the check condition changed to a more\n> > specific one which is less confusing.\n> >\n> > In casual off-list discussion, the point was made to check\n> > SharedRecoveryState to find out the InRecovery value afterward, and\n> > check that using RecoveryInProgress(). But we can't depend on\n> > SharedRecoveryState because at the start it gets initialized to\n> > RECOVERY_STATE_CRASH irrespective of InRecovery that happens later.\n> > Therefore, we can't use RecoveryInProgress() which always returns\n> > true if SharedRecoveryState != RECOVERY_STATE_DONE.\n>\n> Uh, this change has crept into 0002, but it should be in 0004 with the\n> rest of the changes to remove dependencies on variables specific to\n> the startup process. Like I said before, we should really be trying to\n> separate code movement from functional changes. Also, 0002 doesn't\n> actually apply for me. Did you generate these patches with 'git\n> format-patch'?\n>\n> [rhaas pgsql]$ patch -p1 <\n> ~/Downloads/v36-0001-Refactor-some-end-of-recovery-code-out-of-Startu.patch\n> patching file src/backend/access/transam/xlog.c\n> Hunk #1 succeeded at 889 (offset 9 lines).\n> Hunk #2 succeeded at 939 (offset 12 lines).\n> Hunk #3 succeeded at 5734 (offset 37 lines).\n> Hunk #4 succeeded at 8038 (offset 70 lines).\n> Hunk #5 succeeded at 8248 (offset 70 lines).\n> [rhaas pgsql]$ patch -p1 <\n> ~/Downloads/v36-0002-Postpone-some-end-of-recovery-operations-relatin.patch\n> patching file src/backend/access/transam/xlog.c\n> Reversed (or previously applied) patch detected! Assume -R? [n]\n> Apply anyway? 
[n] y\n> Hunk #1 FAILED at 7954.\n> Hunk #2 succeeded at 8079 (offset 70 lines).\n> 1 out of 2 hunks FAILED -- saving rejects to file\n> src/backend/access/transam/xlog.c.rej\n> [rhaas pgsql]$ git reset --hard\n> HEAD is now at b484ddf4d2 Treat ETIMEDOUT as indicating a\n> non-recoverable connection failure.\n> [rhaas pgsql]$ patch -p1 <\n> ~/Downloads/v36-0002-Postpone-some-end-of-recovery-operations-relatin.patch\n> patching file src/backend/access/transam/xlog.c\n> Reversed (or previously applied) patch detected! Assume -R? [n]\n> Apply anyway? [n]\n> Skipping patch.\n> 2 out of 2 hunks ignored -- saving rejects to file\n> src/backend/access/transam/xlog.c.rej\n>\n>\nI tried to apply the patch on the master branch head and it's failing\nwith conflicts.\n\nLater applied patch on below commit and it got applied cleanly:\n\ncommit 7d1aa6bf1c27bf7438179db446f7d1e72ae093d0\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Mon Sep 27 18:48:01 2021 -0400\n\n Re-enable contrib/bloom's TAP tests.\n\nrushabh@rushabh:postgresql$ git apply\nv36-0001-Refactor-some-end-of-recovery-code-out-of-Startu.patch\nrushabh@rushabh:postgresql$ git apply\nv36-0002-Postpone-some-end-of-recovery-operations-relatin.patch\nrushabh@rushabh:postgresql$ git apply\nv36-0003-Create-XLogAcceptWrites-function-with-code-from-.patch\nv36-0003-Create-XLogAcceptWrites-function-with-code-from-.patch:34: space\nbefore tab in indent.\n /*\nv36-0003-Create-XLogAcceptWrites-function-with-code-from-.patch:38: space\nbefore tab in indent.\n */\nv36-0003-Create-XLogAcceptWrites-function-with-code-from-.patch:39: space\nbefore tab in indent.\n Insert->fullPageWrites = lastFullPageWrites;\nwarning: 3 lines add whitespace errors.\nrushabh@rushabh:postgresql$ git apply\nv36-0004-Remove-dependencies-on-startup-process-specifica.patch\n\nThere are whitespace errors on patch 0003.\n\n\n> It seems to me that the approach you're pursuing here can't work,\n> because the long-term goal is to get to a place where, if the 
system\n> starts up read-only, XLogAcceptWrites() might not be called until\n> later, after StartupXLOG() has exited. But in that case the control\n> file state would be DB_IN_PRODUCTION. But my idea of using\n> RecoveryInProgress() won't work either, because we set\n> RECOVERY_STATE_DONE just after we set DB_IN_PRODUCTION. Put\n> differently, the question we want to answer is not \"are we in recovery\n> now?\" but \"did we perform recovery?\". After studying the code a bit, I\n> think a good test might be\n> !XLogRecPtrIsInvalid(XLogCtl->lastReplayedEndRecPtr). If InRecovery\n> gets set to true, then we're certain to enter the if (InRecovery)\n> block that contains the main redo loop. And that block unconditionally\n> does XLogCtl->lastReplayedEndRecPtr = XLogCtl->replayEndRecPtr. I\n> think that replayEndRecPtr can't be 0 because it's supposed to\n> represent the record we're pretending to have last replayed, as\n> explained by the comments. And while lastReplayedEndRecPtr will get\n> updated later as we replay more records, I think it will never be set\n> back to 0. It's only going to increase, as we replay more records. On\n> the other hand if InRecovery = false then we'll never change it, and\n> it seems that it starts out as 0.\n>\n> I was hoping to have more time today to comment on 0004, but the day\n> seems to have gotten away from me. One quick thought is that it looks\n> a bit strange to be getting EndOfLog from GetLastSegSwitchData() which\n> returns lastSegSwitchLSN while getting EndOfLogTLI from replayEndTLI\n> ... because there's also replayEndRecPtr, which seems to go with\n> replayEndTLI. 
It feels like we should use a source for the TLI that\n> clearly matches the source for the corresponding LSN, unless there's\n> some super-good reason to do otherwise.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n>\n>\n\n-- \nRushabh Lathia",
"msg_date": "Mon, 4 Oct 2021 13:57:01 +0530",
"msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Mon, Oct 4, 2021 at 1:57 PM Rushabh Lathia\n<rushabh.lathia@gmail.com> wrote:\n>\n>\n>\n> On Fri, Oct 1, 2021 at 2:29 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>>\n>> On Thu, Sep 30, 2021 at 7:59 AM Amul Sul <sulamul@gmail.com> wrote:\n>> > To find the value of InRecovery after we clear it, patch still uses\n>> > ControlFile's DBState, but now the check condition changed to a more\n>> > specific one which is less confusing.\n>> >\n>> > In casual off-list discussion, the point was made to check\n>> > SharedRecoveryState to find out the InRecovery value afterward, and\n>> > check that using RecoveryInProgress(). But we can't depend on\n>> > SharedRecoveryState because at the start it gets initialized to\n>> > RECOVERY_STATE_CRASH irrespective of InRecovery that happens later.\n>> > Therefore, we can't use RecoveryInProgress() which always returns\n>> > true if SharedRecoveryState != RECOVERY_STATE_DONE.\n>>\n>> Uh, this change has crept into 0002, but it should be in 0004 with the\n>> rest of the changes to remove dependencies on variables specific to\n>> the startup process. Like I said before, we should really be trying to\n>> separate code movement from functional changes.\n\nWell, I have to replace the InRecovery flag in that patch since we are\nmoving code past to the point where the InRecovery flag gets cleared.\nIf I don't do, then the 0002 patch would be wrong since InRecovery is\nalways false, and behaviour won't be the same as it was before that\npatch.\n\n>> Also, 0002 doesn't\n>> actually apply for me. 
Did you generate these patches with 'git\n>> format-patch'?\n>>\n>> [rhaas pgsql]$ patch -p1 <\n>> ~/Downloads/v36-0001-Refactor-some-end-of-recovery-code-out-of-Startu.patch\n>> patching file src/backend/access/transam/xlog.c\n>> Hunk #1 succeeded at 889 (offset 9 lines).\n>> Hunk #2 succeeded at 939 (offset 12 lines).\n>> Hunk #3 succeeded at 5734 (offset 37 lines).\n>> Hunk #4 succeeded at 8038 (offset 70 lines).\n>> Hunk #5 succeeded at 8248 (offset 70 lines).\n>> [rhaas pgsql]$ patch -p1 <\n>> ~/Downloads/v36-0002-Postpone-some-end-of-recovery-operations-relatin.patch\n>> patching file src/backend/access/transam/xlog.c\n>> Reversed (or previously applied) patch detected! Assume -R? [n]\n>> Apply anyway? [n] y\n>> Hunk #1 FAILED at 7954.\n>> Hunk #2 succeeded at 8079 (offset 70 lines).\n>> 1 out of 2 hunks FAILED -- saving rejects to file\n>> src/backend/access/transam/xlog.c.rej\n>> [rhaas pgsql]$ git reset --hard\n>> HEAD is now at b484ddf4d2 Treat ETIMEDOUT as indicating a\n>> non-recoverable connection failure.\n>> [rhaas pgsql]$ patch -p1 <\n>> ~/Downloads/v36-0002-Postpone-some-end-of-recovery-operations-relatin.patch\n>> patching file src/backend/access/transam/xlog.c\n>> Reversed (or previously applied) patch detected! Assume -R? [n]\n>> Apply anyway? 
[n]\n>> Skipping patch.\n>> 2 out of 2 hunks ignored -- saving rejects to file\n>> src/backend/access/transam/xlog.c.rej\n>>\n>\n> I tried to apply the patch on the master branch head and it's failing\n> with conflicts.\n>\n\nThanks, Rushabh, for the quick check, I have attached a rebased version for the\nlatest master head commit # f6b5d05ba9a.\n\n> Later applied patch on below commit and it got applied cleanly:\n>\n> commit 7d1aa6bf1c27bf7438179db446f7d1e72ae093d0\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> Date: Mon Sep 27 18:48:01 2021 -0400\n>\n> Re-enable contrib/bloom's TAP tests.\n>\n> rushabh@rushabh:postgresql$ git apply v36-0001-Refactor-some-end-of-recovery-code-out-of-Startu.patch\n> rushabh@rushabh:postgresql$ git apply v36-0002-Postpone-some-end-of-recovery-operations-relatin.patch\n> rushabh@rushabh:postgresql$ git apply v36-0003-Create-XLogAcceptWrites-function-with-code-from-.patch\n> v36-0003-Create-XLogAcceptWrites-function-with-code-from-.patch:34: space before tab in indent.\n> /*\n> v36-0003-Create-XLogAcceptWrites-function-with-code-from-.patch:38: space before tab in indent.\n> */\n> v36-0003-Create-XLogAcceptWrites-function-with-code-from-.patch:39: space before tab in indent.\n> Insert->fullPageWrites = lastFullPageWrites;\n> warning: 3 lines add whitespace errors.\n> rushabh@rushabh:postgresql$ git apply v36-0004-Remove-dependencies-on-startup-process-specifica.patch\n>\n> There are whitespace errors on patch 0003.\n>\n\nFixed.\n\n>>\n>> It seems to me that the approach you're pursuing here can't work,\n>> because the long-term goal is to get to a place where, if the system\n>> starts up read-only, XLogAcceptWrites() might not be called until\n>> later, after StartupXLOG() has exited. But in that case the control\n>> file state would be DB_IN_PRODUCTION. But my idea of using\n>> RecoveryInProgress() won't work either, because we set\n>> RECOVERY_STATE_DONE just after we set DB_IN_PRODUCTION. 
Put\n>> differently, the question we want to answer is not \"are we in recovery\n>> now?\" but \"did we perform recovery?\". After studying the code a bit, I\n>> think a good test might be\n>> !XLogRecPtrIsInvalid(XLogCtl->lastReplayedEndRecPtr). If InRecovery\n>> gets set to true, then we're certain to enter the if (InRecovery)\n>> block that contains the main redo loop. And that block unconditionally\n>> does XLogCtl->lastReplayedEndRecPtr = XLogCtl->replayEndRecPtr. I\n>> think that replayEndRecPtr can't be 0 because it's supposed to\n>> represent the record we're pretending to have last replayed, as\n>> explained by the comments. And while lastReplayedEndRecPtr will get\n>> updated later as we replay more records, I think it will never be set\n>> back to 0. It's only going to increase, as we replay more records. On\n>> the other hand if InRecovery = false then we'll never change it, and\n>> it seems that it starts out as 0.\n>>\n\nUnderstood, used lastReplayedEndRecPtr but in 0002 patch for the\naforesaid reason.\n\n>> I was hoping to have more time today to comment on 0004, but the day\n>> seems to have gotten away from me. One quick thought is that it looks\n>> a bit strange to be getting EndOfLog from GetLastSegSwitchData() which\n>> returns lastSegSwitchLSN while getting EndOfLogTLI from replayEndTLI\n>> ... because there's also replayEndRecPtr, which seems to go with\n>> replayEndTLI. 
It feels like we should use a source for the TLI that\n>> clearly matches the source for the corresponding LSN, unless there's\n>> some super-good reason to do otherwise.\n\nAgreed, that would be the right thing, but on the latest master head\nthat might not be the right thing to use, because commit # ff9f111bce24\nintroduced the following code, which changes EndOfLog so that it could\ndiffer from replayEndRecPtr:\n\n /*\n * Actually, if WAL ended in an incomplete record, skip the parts that\n * made it through and start writing after the portion that persisted.\n * (It's critical to first write an OVERWRITE_CONTRECORD message, which\n * we'll do as soon as we're open for writing new WAL.)\n */\n if (!XLogRecPtrIsInvalid(missingContrecPtr))\n {\n Assert(!XLogRecPtrIsInvalid(abortedRecPtr));\n EndOfLog = missingContrecPtr;\n }\n\nWith this commit, we have got two new global variables: first,\nmissingContrecPtr, which becomes the EndOfLog and gets stored in shared\nmemory in a few places; and second, abortedRecPtr, which is needed in\nXLogAcceptWrites() and which I have exported into shared memory.\n\nRegards,\nAmul",
"msg_date": "Tue, 5 Oct 2021 16:11:58 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Tue, Oct 05, 2021 at 04:11:58PM +0530, Amul Sul wrote:\n> On Mon, Oct 4, 2021 at 1:57 PM Rushabh Lathia\n> <rushabh.lathia@gmail.com> wrote:\n> >\n> > I tried to apply the patch on the master branch head and it's failing\n> > with conflicts.\n> >\n> \n> Thanks, Rushabh, for the quick check, I have attached a rebased version for the\n> latest master head commit # f6b5d05ba9a.\n> \n\nHi,\n\nI got this error while executing \"make check\" on src/test/recovery:\n\n\"\"\"\nt/026_overwrite_contrecord.pl ........ 1/3 # poll_query_until timed out executing this query:\n# SELECT '0/201D4D8'::pg_lsn <= pg_last_wal_replay_lsn()\n# expecting this output:\n# t\n# last actual query output:\n# f\n# with stderr:\n# Looks like your test exited with 29 just after 1.\nt/026_overwrite_contrecord.pl ........ Dubious, test returned 29 (wstat 7424, 0x1d00)\nFailed 2/3 subtests \n\nTest Summary Report\n-------------------\nt/026_overwrite_contrecord.pl (Wstat: 7424 Tests: 1 Failed: 0)\n Non-zero exit status: 29\n Parse errors: Bad plan. You planned 3 tests but ran 1.\nFiles=26, Tests=279, 400 wallclock secs ( 0.27 usr 0.10 sys + 73.78 cusr 59.66 csys = 133.81 CPU)\nResult: FAIL\nmake: *** [Makefile:23: check] Error 1\n\"\"\"\n\n\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n",
"msg_date": "Wed, 6 Oct 2021 19:26:51 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Oct 7, 2021 at 5:56 AM Jaime Casanova\n<jcasanov@systemguards.com.ec> wrote:\n>\n> On Tue, Oct 05, 2021 at 04:11:58PM +0530, Amul Sul wrote:\n> > On Mon, Oct 4, 2021 at 1:57 PM Rushabh Lathia\n> > <rushabh.lathia@gmail.com> wrote:\n> > >\n> > > I tried to apply the patch on the master branch head and it's failing\n> > > with conflicts.\n> > >\n> >\n> > Thanks, Rushabh, for the quick check, I have attached a rebased version for the\n> > latest master head commit # f6b5d05ba9a.\n> >\n>\n> Hi,\n>\n> I got this error while executing \"make check\" on src/test/recovery:\n>\n> \"\"\"\n> t/026_overwrite_contrecord.pl ........ 1/3 # poll_query_until timed out executing this query:\n> # SELECT '0/201D4D8'::pg_lsn <= pg_last_wal_replay_lsn()\n> # expecting this output:\n> # t\n> # last actual query output:\n> # f\n> # with stderr:\n> # Looks like your test exited with 29 just after 1.\n> t/026_overwrite_contrecord.pl ........ Dubious, test returned 29 (wstat 7424, 0x1d00)\n> Failed 2/3 subtests\n>\n> Test Summary Report\n> -------------------\n> t/026_overwrite_contrecord.pl (Wstat: 7424 Tests: 1 Failed: 0)\n> Non-zero exit status: 29\n> Parse errors: Bad plan. You planned 3 tests but ran 1.\n> Files=26, Tests=279, 400 wallclock secs ( 0.27 usr 0.10 sys + 73.78 cusr 59.66 csys = 133.81 CPU)\n> Result: FAIL\n> make: *** [Makefile:23: check] Error 1\n> \"\"\"\n>\n\nThanks for the reporting problem, I am working on it. The cause of\nfailure is that v37_0004 patch clearing the missingContrecPtr global\nvariable before CreateOverwriteContrecordRecord() execution, which it\nshouldn't.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Thu, 7 Oct 2021 18:21:44 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Oct 7, 2021 at 6:21 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Thu, Oct 7, 2021 at 5:56 AM Jaime Casanova\n> <jcasanov@systemguards.com.ec> wrote:\n> >\n> > On Tue, Oct 05, 2021 at 04:11:58PM +0530, Amul Sul wrote:\n> > > On Mon, Oct 4, 2021 at 1:57 PM Rushabh Lathia\n> > > <rushabh.lathia@gmail.com> wrote:\n> > > >\n> > > > I tried to apply the patch on the master branch head and it's failing\n> > > > with conflicts.\n> > > >\n> > >\n> > > Thanks, Rushabh, for the quick check, I have attached a rebased version for the\n> > > latest master head commit # f6b5d05ba9a.\n> > >\n> >\n> > Hi,\n> >\n> > I got this error while executing \"make check\" on src/test/recovery:\n> >\n> > \"\"\"\n> > t/026_overwrite_contrecord.pl ........ 1/3 # poll_query_until timed out executing this query:\n> > # SELECT '0/201D4D8'::pg_lsn <= pg_last_wal_replay_lsn()\n> > # expecting this output:\n> > # t\n> > # last actual query output:\n> > # f\n> > # with stderr:\n> > # Looks like your test exited with 29 just after 1.\n> > t/026_overwrite_contrecord.pl ........ Dubious, test returned 29 (wstat 7424, 0x1d00)\n> > Failed 2/3 subtests\n> >\n> > Test Summary Report\n> > -------------------\n> > t/026_overwrite_contrecord.pl (Wstat: 7424 Tests: 1 Failed: 0)\n> > Non-zero exit status: 29\n> > Parse errors: Bad plan. You planned 3 tests but ran 1.\n> > Files=26, Tests=279, 400 wallclock secs ( 0.27 usr 0.10 sys + 73.78 cusr 59.66 csys = 133.81 CPU)\n> > Result: FAIL\n> > make: *** [Makefile:23: check] Error 1\n> > \"\"\"\n> >\n>\n> Thanks for the reporting problem, I am working on it. 
The cause of\n> failure is that v37_0004 patch clearing the missingContrecPtr global\n> variable before CreateOverwriteContrecordRecord() execution, which it\n> shouldn't.\n>\n\nIn the attached version I have fixed this issue by restoring missingContrecPtr.\n\nTo handle abortedRecPtr and missingContrecPtr, the global variables\nnewly added through commit # ff9f111bce24, we don't need to store them\nin shared memory separately; instead, we need a flag that indicates\nthat a broken record was found previously, so that at the end of\nrecovery we can write the overwrite contrecord.\n\nThe missingContrecPtr is assigned to the EndOfLog, and we have handled\nEndOfLog previously in the 0004 patch, and the abortedRecPtr is the\nsame as the lastReplayedEndRecPtr, AFAICS. I have added an assert to\nensure that the lastReplayedEndRecPtr value is the same as the\nabortedRecPtr, but I think that is not needed; we can go ahead and\nwrite an overwrite-contrecord starting at lastReplayedEndRecPtr.\n\nRegards,\nAmul",
"msg_date": "Tue, 12 Oct 2021 17:47:22 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Tue, Oct 12, 2021 at 8:18 AM Amul Sul <sulamul@gmail.com> wrote:\n> In the attached version I have fixed this issue by restoring missingContrecPtr.\n>\n> To handle abortedRecPtr and missingContrecPtr newly added global\n> variables thought the commit # ff9f111bce24, we don't need to store\n> them in the shared memory separately, instead, we need a flag that\n> indicates a broken record found previously, at the end of recovery, so\n> that we can overwrite contrecord.\n>\n> The missingContrecPtr is assigned to the EndOfLog, and we have handled\n> EndOfLog previously in the 0004 patch, and the abortedRecPtr is the\n> same as the lastReplayedEndRecPtr, AFAICS. I have added an assert to\n> ensure that the lastReplayedEndRecPtr value is the same as the\n> abortedRecPtr, but I think that is not needed, we can go ahead and\n> write an overwrite-contrecord starting at lastReplayedEndRecPtr.\n\nI thought that it made sense to commit 0001 and 0002 at this point, so\nI have done that. I think that the treatment of missingContrecPtr and\nabortedRecPtr may need more thought yet, so at least for that reason I\ndon't think it's a good idea to proceed with 0004 yet. 0003 is just\ncode movement so I guess that can be committed whenever we're\nconfident that we know exactly which things we want to end up inside\nXLogAcceptWrites().\n\nI do have a few ideas after studying this a bit more:\n\n- I wonder whether, in addition to moving a few things later as 0002\ndid, we also ought to think about moving one thing earlier,\nspecifically XLogReportParameters(). Right now, we have, I believe,\nfour things that write WAL at the end of recovery:\nCreateOverwriteContrecordRecord(), UpdateFullPageWrites(),\nPerformRecoveryXLogAction(), and XLogReportParameters(). 
As the code\nis structured now, we do the first three of those things, and then do\na bunch of other stuff inside CleanupAfterArchiveRecovery() like\nrunning recovery_end_command, and removing non-parent xlog files, and\narchiving the partial segment, and then come back and do the fourth\none. Is there any good reason for that? If not, I think doing them all\ntogether would be cleaner, and would propose to reverse the order of\nCleanupAfterArchiveRecovery() and XLogReportParameters().\n\n- If we did that, then I would further propose to adjust things so\nthat we remove the call to LocalSetXLogInsertAllowed() and the\nassignment LocalXLogInsertAllowed = -1 from inside\nCreateEndOfRecoveryRecord(), the LocalXLogInsertAllowed = -1 from just\nafter UpdateFullPageWrites(), and the call to\nLocalSetXLogInsertAllowed() just before XLogReportParameters().\nInstead, just let the call to LocalSetXLogInsertAllowed() right before\nCreateOverwriteContrecordRecord() remain in effect. There doesn't seem\nto be much point in flipping that switch off and on again, and the\nfact that we have been doing so is in my view just evidence that\nStartupXLOG() doesn't do a very good job of getting related code all\ninto one place.\n\n- It seems really tempting to invent a fourth RecoveryState value that\nindicates that we are done with REDO but not yet in production, and\nmaybe also to rename RecoveryState to something like WALState. I'm\nthinking of something like WAL_STATE_CRASH_RECOVERY,\nWAL_STATE_ARCHIVE_RECOVERY, WAL_STATE_REDO_COMPLETE, and\nWAL_STATE_PRODUCTION. Then, instead of having\nLocalSetXLogInsertAllowed(), we could teach XLogInsertAllowed() that\nthe startup process and the checkpointer are allowed to insert WAL\nwhen the state is WAL_STATE_REDO_COMPLETE, but other processes only\nonce we reach WAL_STATE_PRODUCTION. 
We would set\nWAL_STATE_REDO_COMPLETE where we now call LocalSetXLogInsertAllowed().\nIt's necessary to include the checkpointer, or at least I think it is,\nbecause PerformRecoveryXLogAction() might call RequestCheckpoint(),\nand that's got to work. If we did this, then I think it would also\nsolve another problem which the overall patch set has to address\nsomehow. Say that we eventually move responsibility for the\nto-be-created XLogAcceptWrites() function from the startup process to\nthe checkpointer, as proposed. The checkpointer needs to know when to\ncall it ... and the answer with this change is simple: when we reach\nWAL_STATE_REDO_COMPLETE, it's time!\n\nBut this idea is not completely problem-free. I spent some time poking\nat it and I think it's a little hard to come up with a satisfying way\nto code XLogInsertAllowed(). Right now that function calls\nRecoveryInProgress(), and if RecoveryInProgress() decides that\nrecovery is no longer in progress, it calls InitXLOGAccess(). However,\nthat presumes that the only reason you'd call RecoveryInProgress() is\nto figure out whether you should write WAL, which I don't think is\nreally true, and it also means that, when the wal state is\nWAL_STATE_REDO_COMPLETE, RecoveryInProgress() would need to return\ntrue in the checkpointer and startup process and false everywhere\nelse, which does not sound like a great idea. It seems fine to say\nthat xlog insertion is allowed in some processes but not others,\nbecause not all processes are necessarily equally privileged, but\nwhether or not we're in recovery is supposed to be something about\nwhich everyone agrees, so answering that question differently in\ndifferent processes doesn't seem nice. 
XLogInsertAllowed() could be\nrewritten to check the state directly and make its own determination,\nwithout relying on RecoveryInProgress(), and I think that might be the\nright way to go here.\n\nBut that isn't entirely problem-free either, because there's a lot of\ncode that uses RecoveryInProgress() to answer the question \"should I\nwrite WAL?\" and therefore it's not great if RecoveryInProgress() is\nreturning an answer that is inconsistent with XLogInsertAllowed().\nMarkBufferDirtyHint() and heap_page_prune_opt() are examples of this\nkind of coding. It probably wouldn't break in practice right away,\nbecause most of that code never runs in the startup process or the\ncheckpointer and would therefore never notice the difference in\nbehavior between those two functions, but if in the future we get the\nread-only feature that this thread is supposed to be about, we'd have\nproblems. Not all RecoveryInProgress() calls have this sense - e.g.\nsendDir() in basebackup.c is trying to figure out whether recovery\nended during the backup, not whether we can write WAL. But perhaps\nthis is a good time to go and replace RecoveryInProgress() checks that\nare intending to decide whether or not it's OK to write WAL with\nXLogInsertAllowed() checks (noting that the return value is reversed).\nIf we did that, then I think RecoveryInProgress() could also NOT call\nInitXLOGAccess(), and that could be done only by XLogInsertAllowed(),\nwhich seems like it might be better. But I haven't really tried to\ncode all of this up, so I'm not really sure how it all works out.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 14 Oct 2021 13:40:12 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Thu, Oct 14, 2021 at 11:10 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Oct 12, 2021 at 8:18 AM Amul Sul <sulamul@gmail.com> wrote:\n> > In the attached version I have fixed this issue by restoring missingContrecPtr.\n> >\n> > To handle abortedRecPtr and missingContrecPtr newly added global\n> > variables thought the commit # ff9f111bce24, we don't need to store\n> > them in the shared memory separately, instead, we need a flag that\n> > indicates a broken record found previously, at the end of recovery, so\n> > that we can overwrite contrecord.\n> >\n> > The missingContrecPtr is assigned to the EndOfLog, and we have handled\n> > EndOfLog previously in the 0004 patch, and the abortedRecPtr is the\n> > same as the lastReplayedEndRecPtr, AFAICS. I have added an assert to\n> > ensure that the lastReplayedEndRecPtr value is the same as the\n> > abortedRecPtr, but I think that is not needed, we can go ahead and\n> > write an overwrite-contrecord starting at lastReplayedEndRecPtr.\n>\n> I thought that it made sense to commit 0001 and 0002 at this point, so\n> I have done that. I think that the treatment of missingContrecPtr and\n> abortedRecPtr may need more thought yet, so at least for that reason I\n> don't think it's a good idea to proceed with 0004 yet. 0003 is just\n> code movement so I guess that can be committed whenever we're\n> confident that we know exactly which things we want to end up inside\n> XLogAcceptWrites().\n>\n\nOk.\n\n> I do have a few ideas after studying this a bit more:\n>\n> - I wonder whether, in addition to moving a few things later as 0002\n> did, we also ought to think about moving one thing earlier,\n> specifically XLogReportParameters(). Right now, we have, I believe,\n> four things that write WAL at the end of recovery:\n> CreateOverwriteContrecordRecord(), UpdateFullPageWrites(),\n> PerformRecoveryXLogAction(), and XLogReportParameters(). 
As the code\n> is structured now, we do the first three of those things, and then do\n> a bunch of other stuff inside CleanupAfterArchiveRecovery() like\n> running recovery_end_command, and removing non-parent xlog files, and\n> archiving the partial segment, and then come back and do the fourth\n> one. Is there any good reason for that? If not, I think doing them all\n> together would be cleaner, and would propose to reverse the order of\n> CleanupAfterArchiveRecovery() and XLogReportParameters().\n>\n\nYes, that can be done.\n\n> - If we did that, then I would further propose to adjust things so\n> that we remove the call to LocalSetXLogInsertAllowed() and the\n> assignment LocalXLogInsertAllowed = -1 from inside\n> CreateEndOfRecoveryRecord(), the LocalXLogInsertAllowed = -1 from just\n> after UpdateFullPageWrites(), and the call to\n> LocalSetXLogInsertAllowed() just before XLogReportParameters().\n> Instead, just let the call to LocalSetXLogInsertAllowed() right before\n> CreateOverwriteContrecordRecord() remain in effect. There doesn't seem\n> to be much point in flipping that switch off and on again, and the\n> fact that we have been doing so is in my view just evidence that\n> StartupXLOG() doesn't do a very good job of getting related code all\n> into one place.\n>\n\nCurrently there are three places that call\nLocalSetXLogInsertAllowed() and reset the LocalXLogInsertAllowed flag:\nStartupXLOG(), CreateEndOfRecoveryRecord(), and CreateCheckPoint(). By\ndoing the aforementioned code rearrangement we can get rid of the\nfrequent calls from StartupXLOG() and can completely remove the need\nfor it in CreateEndOfRecoveryRecord(), since that gets called only from\nStartupXLOG() directly. 
Whereas CreateCheckPoint() gets called from StartupXLOG() only when\nrunning in a standalone backend; in that case we don't need to call\nLocalSetXLogInsertAllowed(), but when it runs in the checkpointer\nprocess we do need it.\n\nI tried this in the attached version, but I'm a bit skeptical about the\nchanges that are needed for CreateCheckPoint(); those don't seem to be\nclean. I am wondering if we could completely remove the need for the\nend-of-recovery checkpoint, as proposed in [1]. That would get rid of\nthe CHECKPOINT_END_OF_RECOVERY operation and the\nLocalSetXLogInsertAllowed() requirement in CreateCheckPoint(), and\nafter that we would not expect any checkpoint operation during\nrecovery. If we could do that, then we would have\nLocalSetXLogInsertAllowed() in only one place, i.e. in StartupXLOG()\n(...and in the future in XLogAcceptWrites()) -- the code that runs only\nonce in the lifetime of the server -- and the kludge that the attached\npatch does for CreateCheckPoint() would not be needed.\n\n> - It seems really tempting to invent a fourth RecoveryState value that\n> indicates that we are done with REDO but not yet in production, and\n> maybe also to rename RecoveryState to something like WALState. I'm\n> thinking of something like WAL_STATE_CRASH_RECOVERY,\n> WAL_STATE_ARCHIVE_RECOVERY, WAL_STATE_REDO_COMPLETE, and\n> WAL_STATE_PRODUCTION. Then, instead of having\n> LocalSetXLogInsertAllowed(), we could teach XLogInsertAllowed() that\n> the startup process and the checkpointer are allowed to insert WAL\n> when the state is WAL_STATE_REDO_COMPLETE, but other processes only\n> once we reach WAL_STATE_PRODUCTION. We would set\n> WAL_STATE_REDO_COMPLETE where we now call LocalSetXLogInsertAllowed().\n> It's necessary to include the checkpointer, or at least I think it is,\n> because PerformRecoveryXLogAction() might call RequestCheckpoint(),\n> and that's got to work. 
If we did this, then I think it would also\n> solve another problem which the overall patch set has to address\n> somehow. Say that we eventually move responsibility for the\n> to-be-created XLogAcceptWrites() function from the startup process to\n> the checkpointer, as proposed. The checkpointer needs to know when to\n> call it ... and the answer with this change is simple: when we reach\n> WAL_STATE_REDO_COMPLETE, it's time!\n>\n> But this idea is not completely problem-free. I spent some time poking\n> at it and I think it's a little hard to come up with a satisfying way\n> to code XLogInsertAllowed(). Right now that function calls\n> RecoveryInProgress(), and if RecoveryInProgress() decides that\n> recovery is no longer in progress, it calls InitXLOGAccess(). However,\n> that presumes that the only reason you'd call RecoveryInProgress() is\n> to figure out whether you should write WAL, which I don't think is\n> really true, and it also means that, when the wal state is\n> WAL_STATE_REDO_COMPLETE, RecoveryInProgress() would need to return\n> true in the checkpointer and startup process and false everywhere\n> else, which does not sound like a great idea. It seems fine to say\n> that xlog insertion is allowed in some processes but not others,\n> because not all processes are necessarily equally privileged, but\n> whether or not we're in recovery is supposed to be something about\n> which everyone agrees, so answering that question differently in\n> different processes doesn't seem nice. 
XLogInsertAllowed() could be\n> rewritten to check the state directly and make its own determination,\n> without relying on RecoveryInProgress(), and I think that might be the\n> right way to go here.\n>\n> But that isn't entirely problem-free either, because there's a lot of\n> code that uses RecoveryInProgress() to answer the question \"should I\n> write WAL?\" and therefore it's not great if RecoveryInProgress() is\n> returning an answer that is inconsistent with XLogInsertAllowed().\n> MarkBufferDirtyHint() and heap_page_prune_opt() are examples of this\n> kind of coding. It probably wouldn't break in practice right away,\n> because most of that code never runs in the startup process or the\n> checkpointer and would therefore never notice the difference in\n> behavior between those two functions, but if in the future we get the\n> read-only feature that this thread is supposed to be about, we'd have\n> problems. Not all RecoveryInProgress() calls have this sense - e.g.\n> sendDir() in basebackup.c is trying to figure out whether recovery\n> ended during the backup, not whether we can write WAL. But perhaps\n> this is a good time to go and replace RecoveryInProgress() checks that\n> are intending to decide whether or not it's OK to write WAL with\n> XLogInsertAllowed() checks (noting that the return value is reversed).\n> If we did that, then I think RecoveryInProgress() could also NOT call\n> InitXLOGAccess(), and that could be done only by XLogInsertAllowed(),\n> which seems like it might be better. But I haven't really tried to\n> code all of this up, so I'm not really sure how it all works out.\n>\n\nI agree that calling InitXLOGAccess() from RecoveryInProgress() is not\ngood, but I am not sure about calling it from XLogInsertAllowed()\neither, perhaps, both are status check function and general\nexpectations might be that status checking functions are not going\nchange and/or initialize the system state. 
InitXLOGAccess() should\nget called from the very first WAL write operation if needed, but if\nwe don't want to do that, then I would prefer to call InitXLOGAccess()\nfrom XLogInsertAllowed() instead of RecoveryInProgress().\n\nAs said before, if we were able to get rid of the need for the\nend-of-recovery checkpoint [1], then we wouldn't need separate handling\nin XLogInsertAllowed() for the checkpointer process; that would be much\ncleaner, and for the startup process we would force XLogInsertAllowed()\nto return true by calling LocalSetXLogInsertAllowed() for the time\nbeing, as we are doing right now.\n\nRegards,\nAmul\n\n[1] \"using an end-of-recovery record in all cases\":\nhttps://postgr.es/m/CAAJ_b95xPx6oHRb5VEatGbp-cLsZApf_9GWGtbv9dsFKiV_VDQ@mail.gmail.com",
"msg_date": "Mon, 18 Oct 2021 19:24:13 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Mon, Oct 18, 2021 at 9:54 AM Amul Sul <sulamul@gmail.com> wrote:\n> I tried this in the attached version, but I'm a bit skeptical with\n> changes that are needed for CreateCheckPoint(), those don't seem to be\n> clean.\n\nYeah, that doesn't look great. I don't think it's entirely correct,\nactually, because surely you want LocalXLogInsertAllowed = 0 to be\nexecuted even if !IsPostmasterEnvironment. It's only\nLocalXLogInsertAllowed = -1 that we would want to have depend on\nIsPostmasterEnvironment. But that's pretty ugly too: I guess the\nreason it has to be like is that, if it does that unconditionally, it\nwill overwrite the temporary value of 1 set by the caller, which will\nthen cause problems when the caller tries to XLogReportParameters().\n\nI think that problem goes away if we drive the decision off of shared\nstate rather than a local variable, but I agree that it's otherwise a\nbit tricky to untangle. One idea might be to have\nLocalSetXLogInsertAllowed return the old value. Then we could use the\nsame kind of coding we do when switching memory contexts, where we\nsay:\n\noldcontext = MemoryContextSwitchTo(something);\n// do stuff\nMemoryContextSwitchTo(oldcontext);\n\nHere we could maybe do:\n\noldxlallowed = LocalSetXLogInsertAllowed();\n// do stuff\nXLogInsertAllowed = oldxlallowed;\n\nThat way, instead of CreateCheckPoint() knowing under what\ncircumstances the caller might have changed the value, it only knows\nthat some callers might have already changed the value. That seems\nbetter.\n\n> I agree that calling InitXLOGAccess() from RecoveryInProgress() is not\n> good, but I am not sure about calling it from XLogInsertAllowed()\n> either, perhaps, both are status check function and general\n> expectations might be that status checking functions are not going\n> change and/or initialize the system state. 
InitXLOGAccess() should\n> get called from the very first WAL write operation if needed, but if\n> we don't want to do that, then I would prefer to call InitXLOGAccess()\n> from XLogInsertAllowed() instead of RecoveryInProgress().\n\nWell, that's a fair point, too, but it might not be safe to, say, move\nthis to XLogBeginInsert(). Like, imagine that there's a hypothetical\npiece of code that looks like this:\n\nif (RecoveryInProgress())\n ereport(ERROR, errmsg(\"can't do that in recovery\")));\n\n// do something here that depends on ThisTimeLineID or\nwal_segment_size or RedoRecPtr\n\nXLogBeginInsert();\n....\nlsn = XLogInsert(...);\n\nSuch code would work correctly the way things are today, but if the\nInitXLOGAccess() call were deferred until XLogBeginInsert() time, then\nit would fail.\n\nI was curious whether this is just a theoretical problem. It turns out\nthat it's not. I wrote a couple of just-for-testing patches, which I\nattach here. The first one just adjusts things so that we'll fail an\nassertion if we try to make use of ThisTimeLineID before we've set it\nto a legal value. I had to exempt two places from these checks just\nfor 'make check-world' to pass; these are shown in the patch, and one\nor both of them might be existing bugs -- or maybe not, I haven't\nlooked too deeply. The second one then adjusts the patch to pretend\nthat ThisTimeLineID is not necessarily valid just because we've called\nInitXLOGAccess() but that it is valid after XLogBeginInsert(). With\nthat change, I find about a dozen places where, apparently, the early\ncall to InitXLOGAccess() is critical to getting ThisTimeLineID\nadjusted in time. So apparently a change of this type is not entirely\ntrivial. And this is just a quick test, and just for one of the three\nthings that get initialized here.\n\nOn the other hand, just moving it to XLogInsertAllowed() isn't\nrisk-free either and would likely require adjusting some of the same\nplaces I found with this test. 
So I guess if we want to do something\nlike this we need more study.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 18 Oct 2021 18:20:34 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Tue, Oct 19, 2021 at 3:50 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Oct 18, 2021 at 9:54 AM Amul Sul <sulamul@gmail.com> wrote:\n> > I tried this in the attached version, but I'm a bit skeptical with\n> > changes that are needed for CreateCheckPoint(), those don't seem to be\n> > clean.\n>\n> Yeah, that doesn't look great. I don't think it's entirely correct,\n> actually, because surely you want LocalXLogInsertAllowed = 0 to be\n> executed even if !IsPostmasterEnvironment. It's only\n> LocalXLogInsertAllowed = -1 that we would want to have depend on\n> IsPostmasterEnvironment. But that's pretty ugly too: I guess the\n> reason it has to be like is that, if it does that unconditionally, it\n> will overwrite the temporary value of 1 set by the caller, which will\n> then cause problems when the caller tries to XLogReportParameters().\n>\n> I think that problem goes away if we drive the decision off of shared\n> state rather than a local variable, but I agree that it's otherwise a\n> bit tricky to untangle. One idea might be to have\n> LocalSetXLogInsertAllowed return the old value. Then we could use the\n> same kind of coding we do when switching memory contexts, where we\n> say:\n>\n> oldcontext = MemoryContextSwitchTo(something);\n> // do stuff\n> MemoryContextSwitchTo(oldcontext);\n>\n> Here we could maybe do:\n>\n> oldxlallowed = LocalSetXLogInsertAllowed();\n> // do stuff\n> XLogInsertAllowed = oldxlallowed;\n>\n\nOk, did the same in the attached 0001 patch.\n\nThere is no harm in calling LocalSetXLogInsertAllowed()\nmultiple times, but the problem I can see is that with this patch the user\nis allowed to call LocalSetXLogInsertAllowed() at a time when it is\nnot supposed to be called, e.g. 
when LocalXLogInsertAllowed = 0;\nWAL writes are explicitly disabled.\n\n> That way, instead of CreateCheckPoint() knowing under what\n> circumstances the caller might have changed the value, it only knows\n> that some callers might have already changed the value. That seems\n> better.\n>\n> > I agree that calling InitXLOGAccess() from RecoveryInProgress() is not\n> > good, but I am not sure about calling it from XLogInsertAllowed()\n> > either, perhaps, both are status check function and general\n> > expectations might be that status checking functions are not going\n> > change and/or initialize the system state. InitXLOGAccess() should\n> > get called from the very first WAL write operation if needed, but if\n> > we don't want to do that, then I would prefer to call InitXLOGAccess()\n> > from XLogInsertAllowed() instead of RecoveryInProgress().\n>\n> Well, that's a fair point, too, but it might not be safe to, say, move\n> this to XLogBeginInsert(). Like, imagine that there's a hypothetical\n> piece of code that looks like this:\n>\n> if (RecoveryInProgress())\n> ereport(ERROR, errmsg(\"can't do that in recovery\")));\n>\n> // do something here that depends on ThisTimeLineID or\n> wal_segment_size or RedoRecPtr\n>\n> XLogBeginInsert();\n> ....\n> lsn = XLogInsert(...);\n>\n> Such code would work correctly the way things are today, but if the\n> InitXLOGAccess() call were deferred until XLogBeginInsert() time, then\n> it would fail.\n>\n> I was curious whether this is just a theoretical problem. It turns out\n> that it's not. I wrote a couple of just-for-testing patches, which I\n> attach here. The first one just adjusts things so that we'll fail an\n> assertion if we try to make use of ThisTimeLineID before we've set it\n> to a legal value. I had to exempt two places from these checks just\n> for 'make check-world' to pass; these are shown in the patch, and one\n> or both of them might be existing bugs -- or maybe not, I haven't\n> looked too deeply. 
The second one then adjusts the patch to pretend\n> that ThisTimeLineID is not necessarily valid just because we've called\n> InitXLOGAccess() but that it is valid after XLogBeginInsert(). With\n> that change, I find about a dozen places where, apparently, the early\n> call to InitXLOGAccess() is critical to getting ThisTimeLineID\n> adjusted in time. So apparently a change of this type is not entirely\n> trivial. And this is just a quick test, and just for one of the three\n> things that get initialized here.\n>\n> On the other hand, just moving it to XLogInsertAllowed() isn't\n> risk-free either and would likely require adjusting some of the same\n> places I found with this test. So I guess if we want to do something\n> like this we need more study.\n>\n\nYeah, that requires a lot of energy and time -- I have not done anything\nrelated to this in the attached version.\n\nPlease have a look at the attached version, where the 0001 patch does\nchange LocalSetXLogInsertAllowed() as said before. The 0002 patch moves\nXLogReportParameters() closer to the other WAL write operations and\nremoves unnecessary LocalSetXLogInsertAllowed() calls. 0003 is a\ncode-movement patch that adds the XLogAcceptWrites() function, same as\nbefore, and the 0004 patch tries to remove the dependency. The 0004 patch\ncould change w.r.t. the decision that is going to be made for the patch\nthat I posted[1] to remove the abortedRecPtr global variable. For now, I\nhave copied abortedRecPtr into shared memory. Thanks!\n\n1] https://postgr.es/m/CAAJ_b94Y75ZwMim+gxxexVwf_yzO-dChof90ky0dB2GstspNjA@mail.gmail.com\n\nRegards,\nAmul",
"msg_date": "Mon, 25 Oct 2021 12:34:23 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Mon, Oct 25, 2021 at 3:05 AM Amul Sul <sulamul@gmail.com> wrote:\n> Ok, did the same in the attached 0001 patch.\n>\n> There is no harm in calling LocalSetXLogInsertAllowed() calling\n> multiple times, but the problem I can see is that with this patch user\n> is allowed to call LocalSetXLogInsertAllowed() at the time it is\n> supposed not to be called e.g. when LocalXLogInsertAllowed = 0;\n> WAL writes are explicitly disabled.\n\nI've pushed 0001 and 0002 but I reversed the order of them and made a\nfew other edits.\n\nI don't really see the issue you mention here as a problem. There's\nonly one place where we set LocalXLogInsertAllowed = 0, and I don't\nknow that we'll ever have another one.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 25 Oct 2021 10:45:21 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On 10/25/21, 7:50 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n> I've pushed 0001 and 0002 but I reversed the order of them and made a\r\n> few other edits.\r\n\r\nMy compiler is complaining about oldXLogAllowed possibly being used\r\nuninitialized in CreateCheckPoint(). AFAICT it can just be initially\r\nset to zero to silence this warning because it will, in fact, be\r\ninitialized properly when it is used.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Mon, 25 Oct 2021 19:14:44 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Mon, Oct 25, 2021 at 3:14 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> My compiler is complaining about oldXLogAllowed possibly being used\n> uninitialized in CreateCheckPoint(). AFAICT it can just be initially\n> set to zero to silence this warning because it will, in fact, be\n> initialized properly when it is used.\n\nHmm, I guess I could have foreseen that, had I been a little bit\nsmarter than I am. I have committed a change to initialize it to 0 as\nyou propose.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 25 Oct 2021 16:31:59 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On 10/25/21, 1:33 PM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n> On Mon, Oct 25, 2021 at 3:14 PM Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>> My compiler is complaining about oldXLogAllowed possibly being used\r\n>> uninitialized in CreateCheckPoint(). AFAICT it can just be initially\r\n>> set to zero to silence this warning because it will, in fact, be\r\n>> initialized properly when it is used.\r\n>\r\n> Hmm, I guess I could have foreseen that, had I been a little bit\r\n> smarter than I am. I have committed a change to initialize it to 0 as\r\n> you propose.\r\n\r\nThanks!\r\n\r\nNathan\r\n\r\n",
"msg_date": "Mon, 25 Oct 2021 20:35:28 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Mon, Oct 25, 2021 at 8:15 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Oct 25, 2021 at 3:05 AM Amul Sul <sulamul@gmail.com> wrote:\n> > Ok, did the same in the attached 0001 patch.\n> >\n> > There is no harm in calling LocalSetXLogInsertAllowed() calling\n> > multiple times, but the problem I can see is that with this patch user\n> > is allowed to call LocalSetXLogInsertAllowed() at the time it is\n> > supposed not to be called e.g. when LocalXLogInsertAllowed = 0;\n> > WAL writes are explicitly disabled.\n>\n> I've pushed 0001 and 0002 but I reversed the order of them and made a\n> few other edits.\n>\n\nThank you!\n\nI have rebased the remaining patches on top of the latest master head\n(commit # e63ce9e8d6a).\n\nIn addition to that, I made additional changes to 0002, where I\nhaven't included the previous version's change that tries to remove the\narguments of CleanupAfterArchiveRecovery(), because if we want to use\nXLogCtl->replayEndTLI and XLogCtl->replayEndRecPtr to replace the\nEndOfLogTLI and EndOfLog arguments respectively, then we also need to\nconsider the case where EndOfLog changes if the abort record does\nexist. That can be decided only in XLogAcceptWrites(), before the\nshared-memory value related to the abort record is cleared.\n\nRegards,\nAmul",
"msg_date": "Tue, 26 Oct 2021 16:29:55 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Tue, Oct 26, 2021 at 4:29 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Mon, Oct 25, 2021 at 8:15 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Mon, Oct 25, 2021 at 3:05 AM Amul Sul <sulamul@gmail.com> wrote:\n> > > Ok, did the same in the attached 0001 patch.\n> > >\n> > > There is no harm in calling LocalSetXLogInsertAllowed() calling\n> > > multiple times, but the problem I can see is that with this patch user\n> > > is allowed to call LocalSetXLogInsertAllowed() at the time it is\n> > > supposed not to be called e.g. when LocalXLogInsertAllowed = 0;\n> > > WAL writes are explicitly disabled.\n> >\n> > I've pushed 0001 and 0002 but I reversed the order of them and made a\n> > few other edits.\n> >\n>\n> Thank you!\n>\n> I have rebased the remaining patches on top of the latest master head\n> (commit # e63ce9e8d6a).\n>\n> In addition to that, I did the additional changes to 0002 where I\n> haven't included the change that tries to remove arguments of\n> CleanupAfterArchiveRecovery() in the previous version. Because if we\n> want to use XLogCtl->replayEndTLI and XLogCtl->replayEndRecPtr to\n> replace EndOfLogTLI and EndOfLog arguments respectively, then we also\n> need to consider the case where EndOfLog is changing if the\n> abort-record does exist. That can be decided only in XLogAcceptWrite()\n> before the shared memory value related to abort-record is going to be\n> clear.\n>\n\nAttached is the rebased version of refactoring as well as the\npg_prohibit_wal feature patches for the latest master head (commit #\n39a3105678a).\n\nI was planning to attach the rebased version of isolation test patches\nthat Mark has posted before[1], but some permutation tests are not\nstable, where expected errors get printed differently; therefore, I\ndropped that from the attachment, for now.\n\nRegards,\nAmul\n\n1] https://postgr.es/m/9BA3BA57-6B7B-45CB-B8D9-6B5EB0104FFA@enterprisedb.com",
"msg_date": "Mon, 8 Nov 2021 18:50:01 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Mon, Nov 8, 2021 at 8:20 AM Amul Sul <sulamul@gmail.com> wrote:\n> Attached is the rebased version of refactoring as well as the\n> pg_prohibit_wal feature patches for the latest master head (commit #\n> 39a3105678a).\n\nI spent a lot of time today studying 0002, and specifically the\nquestion of whether EndOfLog must be the same as\nXLogCtl->replayEndRecPtr and whether EndOfLogTLI must be the same as\nXLogCtl->replayEndTLI.\n\nThe answer to the former question is \"no\" because, if we don't enter\nredo, XLogCtl->replayEndRecPtr won't be initialized at all. If we do\nenter redo, then I think it has to be the same unless something very\nweird happens. EndOfLog gets set like this:\n\n XLogBeginRead(xlogreader, LastRec);\n record = ReadRecord(xlogreader, PANIC, false, replayTLI);\n EndOfLog = EndRecPtr;\n\nIn every case that exists in our regression tests, EndRecPtr is the\nsame before these three lines of code as it is afterward. However, if\nyou test with recovery_target=immediate, you can get it to be\ndifferent, because in that case we drop out of the redo loop after\ncalling recoveryStopsBefore() rather than after calling\nrecoveryStopsAfter(). Similarly I'm fairly sure that if you use\nrecovery_target_inclusive=off you can likewise get it to be different\n(though I discovered the hard way that recovery_target_inclusive=off\nis ignored when you use recovery_target_name). It seems like a really\nbad thing that neither recovery_target=immediate nor\nrecovery_target_inclusive=off have any tests, and I think we ought to\nadd some.\n\nAnyway, in effect, these three lines of code have the effect of\nbacking up the xlogreader by one record when we stop before rather\nthan after a record that we're replaying. What that means is that\nEndOfLog is going to be the end+1 of the last record that we actually\nreplayed. There might be one more record that we read but did not\nreplay, and that record won't impact the value we end up with in\nEndOfLog. 
Now, XLogCtl->replayEndRecPtr is also that end+1 of the last\nrecord that we actually replayed. To put that another way, there's no\nway to exit the main redo loop after we set XLogCtl->replayEndRecPtr\nand before we change LastRec. So in the cases where\nXLogCtl->replayEndRecPtr gets initialized at all, it can only be\ndifferent from EndOfLog if something different happens when we re-read\nthe last-replayed WAL record than what happened when we read it the\nfirst time. That seems unlikely, and would be unfortunate if it did\nhappen. I am inclined to think that it might be better not to reread\nthe record at all, though. As far as this patch goes, I think we need\na solution that doesn't involve fetching EndOfLog from a variable\nthat's only sometimes initialized and then not doing anything with it\nexcept in the cases where it was initialized.\n\nAs for EndOfLogTLI, I'm afraid I don't think that's the same thing as\nXLogCtl->replayEndTLI. Now, it's hard to be sure, because I don't\nthink the regression tests contain any scenarios where we run recovery\nand the values end up different. However, I think that the code sets\nEndOfLogTLI to the TLI of the last WAL file that we read, and I think\nXLogCtl->replayEndTLI gets set to the timeline from which that WAL\nrecord originated. So imagine that we are looking for WAL that ought\nto be in 000000010000000000000003 but we don't find it; instead we\nfind 000000020000000000000003 because our recovery target timeline is\n2, or something that has 2 in its history. We will read the WAL for\ntimeline 1 from this file which has timeline 2 in the file name. I\nthink if recovery ends in this file before the timeline switch, these\nvalues will be different. I did not try to construct a test case for\nthis today due to not having enough time, so it's possible that I'm\nwrong about this, but that's how it looks to me from the code.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 12 Nov 2021 15:48:22 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Sat, Nov 13, 2021 at 2:18 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Nov 8, 2021 at 8:20 AM Amul Sul <sulamul@gmail.com> wrote:\n> > Attached is the rebased version of refactoring as well as the\n> > pg_prohibit_wal feature patches for the latest master head (commit #\n> > 39a3105678a).\n>\n> I spent a lot of time today studying 0002, and specifically the\n> question of whether EndOfLog must be the same as\n> XLogCtl->replayEndRecPtr and whether EndOfLogTLI must be the same as\n> XLogCtl->replayEndTLI.\n>\n> The answer to the former question is \"no\" because, if we don't enter\n> redo, XLogCtl->replayEndRecPtr won't be initialized at all. If we do\n> enter redo, then I think it has to be the same unless something very\n> weird happens. EndOfLog gets set like this:\n>\n> XLogBeginRead(xlogreader, LastRec);\n> record = ReadRecord(xlogreader, PANIC, false, replayTLI);\n> EndOfLog = EndRecPtr;\n>\n> In every case that exists in our regression tests, EndRecPtr is the\n> same before these three lines of code as it is afterward. However, if\n> you test with recovery_target=immediate, you can get it to be\n> different, because in that case we drop out of the redo loop after\n> calling recoveryStopsBefore() rather than after calling\n> recoveryStopsAfter(). Similarly I'm fairly sure that if you use\n> recovery_target_inclusive=off you can likewise get it to be different\n> (though I discovered the hard way that recovery_target_inclusive=off\n> is ignored when you use recovery_target_name). 
It seems like a really\n> bad thing that neither recovery_target=immediate nor\n> recovery_target_inclusive=off have any tests, and I think we ought to\n> add some.\n>\n\nrecovery/t/003_recovery_targets.pl has a test for\nrecovery_target=immediate but not for recovery_target_inclusive=off; we\ncan add one for the recovery_target_lsn, recovery_target_time, and\nrecovery_target_xid cases, which are the only ones it affects.\n\n> Anyway, in effect, these three lines of code have the effect of\n> backing up the xlogreader by one record when we stop before rather\n> than after a record that we're replaying. What that means is that\n> EndOfLog is going to be the end+1 of the last record that we actually\n> replayed. There might be one more record that we read but did not\n> replay, and that record won't impact the value we end up with in\n> EndOfLog. Now, XLogCtl->replayEndRecPtr is also that end+1 of the last\n> record that we actually replayed. To put that another way, there's no\n> way to exit the main redo loop after we set XLogCtl->replayEndRecPtr\n> and before we change LastRec. So in the cases where\n> XLogCtl->replayEndRecPtr gets initialized at all, it can only be\n> different from EndOfLog if something different happens when we re-read\n> the last-replayed WAL record than what happened when we read it the\n> first time. That seems unlikely, and would be unfortunate it if it did\n> happen. I am inclined to think that it might be better not to reread\n> the record at all, though.\n\nThere are two reasons that the record is reread: the first is the one\nthat you have just explained, where the redo loop drops out due to\nrecoveryStopsBefore(), and the other is that InRecovery is false.\n\nIn the former case, at the end, the redo while-loop does read a new record,\nwhich in effect updates EndRecPtr, and when we break the loop, we do\nreach the place where we reread the record -- where we read the\nrecord (i.e. 
LastRec) before the record that the redo loop has read, and\nwhich correctly sets EndRecPtr. In the latter case, definitely, we\ndon't need any adjustment to EndRecPtr.\n\nSo technically one case needs a reread, but even that is not needed: we\nhave that value in XLogCtl->lastReplayedEndRecPtr. I do agree that we\ndo not need to reread the record, but EndOfLog and EndOfLogTLI should\nbe set conditionally, something like:\n\nif (InRecovery)\n{\n EndOfLog = XLogCtl->lastReplayedEndRecPtr;\n EndOfLogTLI = XLogCtl->lastReplayedTLI;\n}\nelse\n{\n EndOfLog = EndRecPtr;\n EndOfLogTLI = replayTLI;\n}\n\n> As far as this patch goes, I think we need\n> a solution that doesn't involve fetching EndOfLog from a variable\n> that's only sometimes initialized and then not doing anything with it\n> except in the cases where it was initialized.\n>\n\nAnother reason is that EndOfLog could change further in the following case:\n\n/*\n * Actually, if WAL ended in an incomplete record, skip the parts that\n * made it through and start writing after the portion that persisted.\n * (It's critical to first write an OVERWRITE_CONTRECORD message, which\n * we'll do as soon as we're open for writing new WAL.)\n */\nif (!XLogRecPtrIsInvalid(missingContrecPtr))\n{\n Assert(!XLogRecPtrIsInvalid(abortedRecPtr));\n EndOfLog = missingContrecPtr;\n}\n\nNow the only solution that I can think of is to copy EndOfLog (and\nlikewise EndOfLogTLI) into shared memory.\n\n> As for EndOfLogTLI, I'm afraid I don't think that's the same thing as\n> XLogCtl->replayEndTLI. Now, it's hard to be sure, because I don't\n> think the regression tests contain any scenarios where we run recovery\n> and the values end up different. However, I think that the code sets\n> EndOfLogTLI to the TLI of the last WAL file that we read, and I think\n> XLogCtl->replayEndTLI gets set to the timeline from which that WAL\n> record originated. 
So imagine that we are looking for WAL that ought\n> to be in 000000010000000000000003 but we don't find it; instead we\n> find 000000020000000000000003 because our recovery target timeline is\n> 2, or something that has 2 in its history. We will read the WAL for\n> timeline 1 from this file which has timeline 2 in the file name. I\n> think if recovery ends in this file before the timeline switch, these\n> values will be different. I did not try to construct a test case for\n> this today due to not having enough time, so it's possible that I'm\n> wrong about this, but that's how it looks to me from the code.\n>\n\nI am not sure I have understood this scenario, due to a lack of\nexpertise in this area -- why would we not find the record that ought\nto be in 000000010000000000000003? Possibly WAL corruption, or that\nfile is missing?\n\nRegards,\nAmul\n\n\n",
"msg_date": "Wed, 17 Nov 2021 11:13:32 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, Nov 17, 2021 at 11:13 AM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Sat, Nov 13, 2021 at 2:18 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Mon, Nov 8, 2021 at 8:20 AM Amul Sul <sulamul@gmail.com> wrote:\n> > > Attached is the rebased version of refactoring as well as the\n> > > pg_prohibit_wal feature patches for the latest master head (commit #\n> > > 39a3105678a).\n> >\n> > I spent a lot of time today studying 0002, and specifically the\n> > question of whether EndOfLog must be the same as\n> > XLogCtl->replayEndRecPtr and whether EndOfLogTLI must be the same as\n> > XLogCtl->replayEndTLI.\n> >\n> > The answer to the former question is \"no\" because, if we don't enter\n> > redo, XLogCtl->replayEndRecPtr won't be initialized at all. If we do\n> > enter redo, then I think it has to be the same unless something very\n> > weird happens. EndOfLog gets set like this:\n> >\n> > XLogBeginRead(xlogreader, LastRec);\n> > record = ReadRecord(xlogreader, PANIC, false, replayTLI);\n> > EndOfLog = EndRecPtr;\n> >\n> > In every case that exists in our regression tests, EndRecPtr is the\n> > same before these three lines of code as it is afterward. However, if\n> > you test with recovery_target=immediate, you can get it to be\n> > different, because in that case we drop out of the redo loop after\n> > calling recoveryStopsBefore() rather than after calling\n> > recoveryStopsAfter(). Similarly I'm fairly sure that if you use\n> > recovery_target_inclusive=off you can likewise get it to be different\n> > (though I discovered the hard way that recovery_target_inclusive=off\n> > is ignored when you use recovery_target_name). 
It seems like a really\n> > bad thing that neither recovery_target=immediate nor\n> > recovery_target_inclusive=off have any tests, and I think we ought to\n> > add some.\n> >\n>\n> recovery/t/003_recovery_targets.pl has test for\n> recovery_target=immediate but not for recovery_target_inclusive=off, we\n> can add that for recovery_target_lsn, recovery_target_time, and\n> recovery_target_xid case only where it affects.\n>\n> > Anyway, in effect, these three lines of code have the effect of\n> > backing up the xlogreader by one record when we stop before rather\n> > than after a record that we're replaying. What that means is that\n> > EndOfLog is going to be the end+1 of the last record that we actually\n> > replayed. There might be one more record that we read but did not\n> > replay, and that record won't impact the value we end up with in\n> > EndOfLog. Now, XLogCtl->replayEndRecPtr is also that end+1 of the last\n> > record that we actually replayed. To put that another way, there's no\n> > way to exit the main redo loop after we set XLogCtl->replayEndRecPtr\n> > and before we change LastRec. So in the cases where\n> > XLogCtl->replayEndRecPtr gets initialized at all, it can only be\n> > different from EndOfLog if something different happens when we re-read\n> > the last-replayed WAL record than what happened when we read it the\n> > first time. That seems unlikely, and would be unfortunate it if it did\n> > happen. I am inclined to think that it might be better not to reread\n> > the record at all, though.\n>\n> There are two reasons that the record is reread; first, one that you\n> have just explained where the redo loop drops out due to\n> recoveryStopsBefore() and another one is that InRecovery is false.\n>\n> In the formal case at the end, redo while-loop does read a new record\n> which in effect updates EndRecPtr and when we breaks the loop, we do\n> reach the place where we do reread record -- where we do read the\n> record (i.e. 
LastRec) before the record that redo loop has read and\n> which correctly sets EndRecPtr. In the latter case, definitely, we\n> don't need any adjustment to EndRecPtr.\n>\n> So technically one case needs reread but that is also not needed, we\n> have that value in XLogCtl->lastReplayedEndRecPtr. I do agree that we\n> do not need to reread the record, but EndOfLog and EndOfLogTLI should\n> be set conditionally something like:\n>\n> if (InRecovery)\n> {\n> EndOfLog = XLogCtl->lastReplayedEndRecPtr;\n> EndOfLogTLI = XLogCtl->lastReplayedTLI;\n> }\n> else\n> {\n> EndOfLog = EndRecPtr;\n> EndOfLogTLI = replayTLI;\n> }\n>\n> > As far as this patch goes, I think we need\n> > a solution that doesn't involve fetching EndOfLog from a variable\n> > that's only sometimes initialized and then not doing anything with it\n> > except in the cases where it was initialized.\n> >\n>\n> Another reason could be EndOfLog changes further in the following case:\n>\n> /*\n> * Actually, if WAL ended in an incomplete record, skip the parts that\n> * made it through and start writing after the portion that persisted.\n> * (It's critical to first write an OVERWRITE_CONTRECORD message, which\n> * we'll do as soon as we're open for writing new WAL.)\n> */\n> if (!XLogRecPtrIsInvalid(missingContrecPtr))\n> {\n> Assert(!XLogRecPtrIsInvalid(abortedRecPtr));\n> EndOfLog = missingContrecPtr;\n> }\n>\n> Now only solution that I can think is to copy EndOfLog (so\n> EndOfLogTLI) into shared memory.\n>\n> > As for EndOfLogTLI, I'm afraid I don't think that's the same thing as\n> > XLogCtl->replayEndTLI. Now, it's hard to be sure, because I don't\n> > think the regression tests contain any scenarios where we run recovery\n> > and the values end up different. However, I think that the code sets\n> > EndOfLogTLI to the TLI of the last WAL file that we read, and I think\n> > XLogCtl->replayEndTLI gets set to the timeline from which that WAL\n> > record originated. 
So imagine that we are looking for WAL that ought\n> > to be in 000000010000000000000003 but we don't find it; instead we\n> > find 000000020000000000000003 because our recovery target timeline is\n> > 2, or something that has 2 in its history. We will read the WAL for\n> > timeline 1 from this file which has timeline 2 in the file name. I\n> > think if recovery ends in this file before the timeline switch, these\n> > values will be different. I did not try to construct a test case for\n> > this today due to not having enough time, so it's possible that I'm\n> > wrong about this, but that's how it looks to me from the code.\n> >\n>\n> I am not sure, I have understood this scenario due to lack of\n> expertise in this area -- Why would the record we looking that ought\n> to be in 000000010000000000000003 we don't find it? Possibly WAL\n> corruption or that file is missing?\n>\n\nOn further study of XLogPageRead(), WaitForWALToBecomeAvailable(), and\nXLogFileReadAnyTLI(), I think I can make sense of this: there could be\na case where the record we are looking for belongs to TLI 1, but we\nmight open the file with TLI 2. But I am wondering what's wrong with\nsaying that the record's TLI is 1 even if we read it from a file that\nhas TLI 2 or 3 or 4 in its file name -- that statement is still true,\nand that record should still be accessible from the file name with TLI\n1. Also, if we are going to consider this record, which exists before\nthe timeline switch point, as the EndOfLog, then why should we be\nworried about the later timeline switch, since everything after the\nEndOfLog is eventually going to be useless for us? We might continue\nswitching TLI and/or writing the WAL right after EndOfLog; correct me\nif I am missing something here.\n\nFurther, I still think replayEndTLI is set to the correct value that\nwe are looking for as EndOfLogTLI when we go through the redo loop. 
When it\nreads a record and finds a change in the current replayTLI, it\nupdates that as:\n\nif (newReplayTLI != replayTLI)\n{\n /* Check that it's OK to switch to this TLI */\n checkTimeLineSwitch(EndRecPtr, newReplayTLI,\n prevReplayTLI, replayTLI);\n\n /* Following WAL records should be run with new TLI */\n replayTLI = newReplayTLI;\n switchedTLI = true;\n}\n\nThen replayEndTLI gets updated. If we are going to skip the reread of\n\"LastRec\" that we were discussing, then I think the following code\nthat fetches the EndOfLogTLI is also not needed; XLogCtl->replayEndTLI\n(or XLogCtl->lastReplayedTLI) or replayTLI (when InRecovery is false)\nshould be enough, AFAICU.\n\n/*\n * EndOfLogTLI is the TLI in the filename of the XLOG segment containing\n * the end-of-log. It could be different from the timeline that EndOfLog\n * nominally belongs to, if there was a timeline switch in that segment,\n * and we were reading the old WAL from a segment belonging to a higher\n * timeline.\n */\nEndOfLogTLI = xlogreader->seg.ws_tli;\n\nRegards,\nAmul\n\n\n",
"msg_date": "Wed, 17 Nov 2021 16:07:15 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": " On Wed, Nov 17, 2021 at 4:07 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Wed, Nov 17, 2021 at 11:13 AM Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > On Sat, Nov 13, 2021 at 2:18 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > >\n> > > On Mon, Nov 8, 2021 at 8:20 AM Amul Sul <sulamul@gmail.com> wrote:\n> > > > Attached is the rebased version of refactoring as well as the\n> > > > pg_prohibit_wal feature patches for the latest master head (commit #\n> > > > 39a3105678a).\n> > >\n> > > I spent a lot of time today studying 0002, and specifically the\n> > > question of whether EndOfLog must be the same as\n> > > XLogCtl->replayEndRecPtr and whether EndOfLogTLI must be the same as\n> > > XLogCtl->replayEndTLI.\n> > >\n> > > The answer to the former question is \"no\" because, if we don't enter\n> > > redo, XLogCtl->replayEndRecPtr won't be initialized at all. If we do\n> > > enter redo, then I think it has to be the same unless something very\n> > > weird happens. EndOfLog gets set like this:\n> > >\n> > > XLogBeginRead(xlogreader, LastRec);\n> > > record = ReadRecord(xlogreader, PANIC, false, replayTLI);\n> > > EndOfLog = EndRecPtr;\n> > >\n> > > In every case that exists in our regression tests, EndRecPtr is the\n> > > same before these three lines of code as it is afterward. However, if\n> > > you test with recovery_target=immediate, you can get it to be\n> > > different, because in that case we drop out of the redo loop after\n> > > calling recoveryStopsBefore() rather than after calling\n> > > recoveryStopsAfter(). Similarly I'm fairly sure that if you use\n> > > recovery_target_inclusive=off you can likewise get it to be different\n> > > (though I discovered the hard way that recovery_target_inclusive=off\n> > > is ignored when you use recovery_target_name). 
It seems like a really\n> > > bad thing that neither recovery_target=immediate nor\n> > > recovery_target_inclusive=off have any tests, and I think we ought to\n> > > add some.\n> > >\n> >\n> > recovery/t/003_recovery_targets.pl has test for\n> > recovery_target=immediate but not for recovery_target_inclusive=off, we\n> > can add that for recovery_target_lsn, recovery_target_time, and\n> > recovery_target_xid case only where it affects.\n> >\n> > > Anyway, in effect, these three lines of code have the effect of\n> > > backing up the xlogreader by one record when we stop before rather\n> > > than after a record that we're replaying. What that means is that\n> > > EndOfLog is going to be the end+1 of the last record that we actually\n> > > replayed. There might be one more record that we read but did not\n> > > replay, and that record won't impact the value we end up with in\n> > > EndOfLog. Now, XLogCtl->replayEndRecPtr is also that end+1 of the last\n> > > record that we actually replayed. To put that another way, there's no\n> > > way to exit the main redo loop after we set XLogCtl->replayEndRecPtr\n> > > and before we change LastRec. So in the cases where\n> > > XLogCtl->replayEndRecPtr gets initialized at all, it can only be\n> > > different from EndOfLog if something different happens when we re-read\n> > > the last-replayed WAL record than what happened when we read it the\n> > > first time. That seems unlikely, and would be unfortunate it if it did\n> > > happen. 
I am inclined to think that it might be better not to reread\n> > > the record at all, though.\n> >\n> > There are two reasons that the record is reread; first, one that you\n> > have just explained where the redo loop drops out due to\n> > recoveryStopsBefore() and another one is that InRecovery is false.\n> >\n> > In the formal case at the end, redo while-loop does read a new record\n> > which in effect updates EndRecPtr and when we breaks the loop, we do\n> > reach the place where we do reread record -- where we do read the\n> > record (i.e. LastRec) before the record that redo loop has read and\n> > which correctly sets EndRecPtr. In the latter case, definitely, we\n> > don't need any adjustment to EndRecPtr.\n> >\n> > So technically one case needs reread but that is also not needed, we\n> > have that value in XLogCtl->lastReplayedEndRecPtr. I do agree that we\n> > do not need to reread the record, but EndOfLog and EndOfLogTLI should\n> > be set conditionally something like:\n> >\n> > if (InRecovery)\n> > {\n> > EndOfLog = XLogCtl->lastReplayedEndRecPtr;\n> > EndOfLogTLI = XLogCtl->lastReplayedTLI;\n> > }\n> > else\n> > {\n> > EndOfLog = EndRecPtr;\n> > EndOfLogTLI = replayTLI;\n> > }\n> >\n> > > As far as this patch goes, I think we need\n> > > a solution that doesn't involve fetching EndOfLog from a variable\n> > > that's only sometimes initialized and then not doing anything with it\n> > > except in the cases where it was initialized.\n> > >\n> >\n> > Another reason could be EndOfLog changes further in the following case:\n> >\n> > /*\n> > * Actually, if WAL ended in an incomplete record, skip the parts that\n> > * made it through and start writing after the portion that persisted.\n> > * (It's critical to first write an OVERWRITE_CONTRECORD message, which\n> > * we'll do as soon as we're open for writing new WAL.)\n> > */\n> > if (!XLogRecPtrIsInvalid(missingContrecPtr))\n> > {\n> > Assert(!XLogRecPtrIsInvalid(abortedRecPtr));\n> > EndOfLog = 
missingContrecPtr;\n> > }\n> >\n> > Now only solution that I can think is to copy EndOfLog (so\n> > EndOfLogTLI) into shared memory.\n> >\n> > > As for EndOfLogTLI, I'm afraid I don't think that's the same thing as\n> > > XLogCtl->replayEndTLI. Now, it's hard to be sure, because I don't\n> > > think the regression tests contain any scenarios where we run recovery\n> > > and the values end up different. However, I think that the code sets\n> > > EndOfLogTLI to the TLI of the last WAL file that we read, and I think\n> > > XLogCtl->replayEndTLI gets set to the timeline from which that WAL\n> > > record originated. So imagine that we are looking for WAL that ought\n> > > to be in 000000010000000000000003 but we don't find it; instead we\n> > > find 000000020000000000000003 because our recovery target timeline is\n> > > 2, or something that has 2 in its history. We will read the WAL for\n> > > timeline 1 from this file which has timeline 2 in the file name. I\n> > > think if recovery ends in this file before the timeline switch, these\n> > > values will be different. I did not try to construct a test case for\n> > > this today due to not having enough time, so it's possible that I'm\n> > > wrong about this, but that's how it looks to me from the code.\n> > >\n> >\n> > I am not sure, I have understood this scenario due to lack of\n> > expertise in this area -- Why would the record we looking that ought\n> > to be in 000000010000000000000003 we don't find it? Possibly WAL\n> > corruption or that file is missing?\n> >\n>\n> On further study, XLogPageRead(), WaitForWALToBecomeAvailable(), and\n> XLogFileReadAnyTLI(), I think I could make a sense that there could be\n> a case where the record belong to TLI 1 we are looking for; we might\n> open the file with TLI 2. 
But, I am wondering what's wrong if we say\n> that TLI 1 for that record even if we read it from the file has TLI 2 or 3 or 4\n> in its file name -- that statement is still true, and that record\n> should be still accessible from the filename with TLI 1. Also, if we\n> going to consider this reading record exists before the timeline\n> switch point as the EndOfLog then why should be worried about the\n> latter timeline switch which eventually everything after the EndOfLog\n> going to be useless for us. We might continue switching TLI and/or\n> writing the WAL right after EndOfLog, correct me if I am missing\n> something here.\n>\n> Further, I still think replayEndTLI has set to the correct value what\n> we looking for EndOfLogTLI when we go through the redo loop. When it\n> read the record and finds a change in the current replayTLI then it\n> updates that as:\n>\n> if (newReplayTLI != replayTLI)\n> {\n> /* Check that it's OK to switch to this TLI */\n> checkTimeLineSwitch(EndRecPtr, newReplayTLI,\n> prevReplayTLI, replayTLI);\n>\n> /* Following WAL records should be run with new TLI */\n> replayTLI = newReplayTLI;\n> switchedTLI = true;\n> }\n>\n> Then replayEndTLI gets updated. If we going to skip the reread of\n> \"LastRec\" that we were discussing, then I think the following code\n> that fetches the EndOfLogTLI is also not needed, XLogCtl->replayEndTLI\n> (or XLogCtl->lastReplayedTLI) or replayTLI (when InRecovery is false)\n> should be enough, AFAICU.\n>\n> /*\n> * EndOfLogTLI is the TLI in the filename of the XLOG segment containing\n> * the end-of-log. It could be different from the timeline that EndOfLog\n> * nominally belongs to, if there was a timeline switch in that segment,\n> * and we were reading the old WAL from a segment belonging to a higher\n> * timeline.\n> */\n> EndOfLogTLI = xlogreader->seg.ws_tli;\n>\n\nI think I found the right case for this; the above TLI fetch is needed\nin the case where we restore from the archived WAL files. In my trial,\nthe archive directory has files as below (kindly ignore the extra\nhistory file; I performed a few more trials to be sure):\n\n-rw-------. 1 amul amul 16777216 Nov 17 06:36 00000004000000000000001E\n-rw-------. 1 amul amul 16777216 Nov 17 06:39 00000004000000000000001F.partial\n-rw-------. 1 amul amul 128 Nov 17 06:36 00000004.history\n-rw-------. 1 amul amul 16777216 Nov 17 06:40 00000005000000000000001F\n-rw-------. 1 amul amul 171 Nov 17 06:39 00000005.history\n-rw-------. 1 amul amul 209 Nov 17 06:45 00000006.history\n-rw-------. 1 amul amul 247 Nov 17 06:52 00000007.history\n\nThe timeline is switched in the 1F file, but the archiver backed up the\nolder timeline file and renamed it. While performing PITR using these\narchived files, the .partial file seems to be skipped from the\nrestore. The file with the next timeline id is selected to read the\nrecords that belong to the previous timeline id as well (i.e. 4 here,\nall the records before the timeline switch point). Here are the files\ninside the pg_wal directory after restore; note that in the current\nexperiment, I chose recovery_target_xid = <just before the timeline#5\nswitch point > and then recovery_target_action = 'promote':\n\n-rw-------. 1 amul amul 85 Nov 17 07:33 00000003.history\n-rw-------. 1 amul amul 16777216 Nov 17 07:33 00000004000000000000001E\n-rw-------. 1 amul amul 128 Nov 17 07:33 00000004.history\n-rw-------. 1 amul amul 16777216 Nov 17 07:33 00000005000000000000001F\n-rw-------. 1 amul amul 171 Nov 17 07:33 00000005.history\n-rw-------. 1 amul amul 209 Nov 17 07:33 00000006.history\n-rw-------. 1 amul amul 247 Nov 17 07:33 00000007.history\n-rw-------. 1 amul amul 16777216 Nov 17 07:33 00000008000000000000001F\n\nThe last one is the new WAL file created in that cluster.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Wed, 17 Nov 2021 18:20:42 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Wed, Nov 17, 2021 at 6:20 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Wed, Nov 17, 2021 at 4:07 PM Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > On Wed, Nov 17, 2021 at 11:13 AM Amul Sul <sulamul@gmail.com> wrote:\n> > >\n> > > On Sat, Nov 13, 2021 at 2:18 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > >\n> > > > On Mon, Nov 8, 2021 at 8:20 AM Amul Sul <sulamul@gmail.com> wrote:\n> > > > > Attached is the rebased version of refactoring as well as the\n> > > > > pg_prohibit_wal feature patches for the latest master head (commit #\n> > > > > 39a3105678a).\n> > > >\n> > > > I spent a lot of time today studying 0002, and specifically the\n> > > > question of whether EndOfLog must be the same as\n> > > > XLogCtl->replayEndRecPtr and whether EndOfLogTLI must be the same as\n> > > > XLogCtl->replayEndTLI.\n> > > >\n> > > > The answer to the former question is \"no\" because, if we don't enter\n> > > > redo, XLogCtl->replayEndRecPtr won't be initialized at all. If we do\n> > > > enter redo, then I think it has to be the same unless something very\n> > > > weird happens. EndOfLog gets set like this:\n> > > >\n> > > > XLogBeginRead(xlogreader, LastRec);\n> > > > record = ReadRecord(xlogreader, PANIC, false, replayTLI);\n> > > > EndOfLog = EndRecPtr;\n> > > >\n> > > > In every case that exists in our regression tests, EndRecPtr is the\n> > > > same before these three lines of code as it is afterward. However, if\n> > > > you test with recovery_target=immediate, you can get it to be\n> > > > different, because in that case we drop out of the redo loop after\n> > > > calling recoveryStopsBefore() rather than after calling\n> > > > recoveryStopsAfter(). Similarly I'm fairly sure that if you use\n> > > > recovery_target_inclusive=off you can likewise get it to be different\n> > > > (though I discovered the hard way that recovery_target_inclusive=off\n> > > > is ignored when you use recovery_target_name). 
It seems like a really\n> > > > bad thing that neither recovery_target=immediate nor\n> > > > recovery_target_inclusive=off have any tests, and I think we ought to\n> > > > add some.\n> > > >\n> > >\n> > > recovery/t/003_recovery_targets.pl has test for\n> > > recovery_target=immediate but not for recovery_target_inclusive=off, we\n> > > can add that for recovery_target_lsn, recovery_target_time, and\n> > > recovery_target_xid case only where it affects.\n> > >\n> > > > Anyway, in effect, these three lines of code have the effect of\n> > > > backing up the xlogreader by one record when we stop before rather\n> > > > than after a record that we're replaying. What that means is that\n> > > > EndOfLog is going to be the end+1 of the last record that we actually\n> > > > replayed. There might be one more record that we read but did not\n> > > > replay, and that record won't impact the value we end up with in\n> > > > EndOfLog. Now, XLogCtl->replayEndRecPtr is also that end+1 of the last\n> > > > record that we actually replayed. To put that another way, there's no\n> > > > way to exit the main redo loop after we set XLogCtl->replayEndRecPtr\n> > > > and before we change LastRec. So in the cases where\n> > > > XLogCtl->replayEndRecPtr gets initialized at all, it can only be\n> > > > different from EndOfLog if something different happens when we re-read\n> > > > the last-replayed WAL record than what happened when we read it the\n> > > > first time. That seems unlikely, and would be unfortunate it if it did\n> > > > happen. 
I am inclined to think that it might be better not to reread\n> > > > the record at all, though.\n> > >\n> > > There are two reasons that the record is reread; first, one that you\n> > > have just explained where the redo loop drops out due to\n> > > recoveryStopsBefore() and another one is that InRecovery is false.\n> > >\n> > > In the formal case at the end, redo while-loop does read a new record\n> > > which in effect updates EndRecPtr and when we breaks the loop, we do\n> > > reach the place where we do reread record -- where we do read the\n> > > record (i.e. LastRec) before the record that redo loop has read and\n> > > which correctly sets EndRecPtr. In the latter case, definitely, we\n> > > don't need any adjustment to EndRecPtr.\n> > >\n> > > So technically one case needs reread but that is also not needed, we\n> > > have that value in XLogCtl->lastReplayedEndRecPtr. I do agree that we\n> > > do not need to reread the record, but EndOfLog and EndOfLogTLI should\n> > > be set conditionally something like:\n> > >\n> > > if (InRecovery)\n> > > {\n> > > EndOfLog = XLogCtl->lastReplayedEndRecPtr;\n> > > EndOfLogTLI = XLogCtl->lastReplayedTLI;\n> > > }\n> > > else\n> > > {\n> > > EndOfLog = EndRecPtr;\n> > > EndOfLogTLI = replayTLI;\n> > > }\n> > >\n> > > > As far as this patch goes, I think we need\n> > > > a solution that doesn't involve fetching EndOfLog from a variable\n> > > > that's only sometimes initialized and then not doing anything with it\n> > > > except in the cases where it was initialized.\n> > > >\n> > >\n> > > Another reason could be EndOfLog changes further in the following case:\n> > >\n> > > /*\n> > > * Actually, if WAL ended in an incomplete record, skip the parts that\n> > > * made it through and start writing after the portion that persisted.\n> > > * (It's critical to first write an OVERWRITE_CONTRECORD message, which\n> > > * we'll do as soon as we're open for writing new WAL.)\n> > > */\n> > > if 
(!XLogRecPtrIsInvalid(missingContrecPtr))\n> > > {\n> > > Assert(!XLogRecPtrIsInvalid(abortedRecPtr));\n> > > EndOfLog = missingContrecPtr;\n> > > }\n> > >\n> > > Now only solution that I can think is to copy EndOfLog (so\n> > > EndOfLogTLI) into shared memory.\n> > >\n> > > > As for EndOfLogTLI, I'm afraid I don't think that's the same thing as\n> > > > XLogCtl->replayEndTLI. Now, it's hard to be sure, because I don't\n> > > > think the regression tests contain any scenarios where we run recovery\n> > > > and the values end up different. However, I think that the code sets\n> > > > EndOfLogTLI to the TLI of the last WAL file that we read, and I think\n> > > > XLogCtl->replayEndTLI gets set to the timeline from which that WAL\n> > > > record originated. So imagine that we are looking for WAL that ought\n> > > > to be in 000000010000000000000003 but we don't find it; instead we\n> > > > find 000000020000000000000003 because our recovery target timeline is\n> > > > 2, or something that has 2 in its history. We will read the WAL for\n> > > > timeline 1 from this file which has timeline 2 in the file name. I\n> > > > think if recovery ends in this file before the timeline switch, these\n> > > > values will be different. I did not try to construct a test case for\n> > > > this today due to not having enough time, so it's possible that I'm\n> > > > wrong about this, but that's how it looks to me from the code.\n> > > >\n> > >\n> > > I am not sure, I have understood this scenario due to lack of\n> > > expertise in this area -- Why would the record we looking that ought\n> > > to be in 000000010000000000000003 we don't find it? Possibly WAL\n> > > corruption or that file is missing?\n> > >\n> >\n> > On further study, XLogPageRead(), WaitForWALToBecomeAvailable(), and\n> > XLogFileReadAnyTLI(), I think I could make a sense that there could be\n> > a case where the record belong to TLI 1 we are looking for; we might\n> > open the file with TLI 2. 
But, I am wondering what's wrong if we say\n> > that TLI 1 for that record even if we read it from the file has TLI 2 or 3 or 4\n> > in its file name -- that statement is still true, and that record\n> > should be still accessible from the filename with TLI 1. Also, if we\n> > going to consider this reading record exists before the timeline\n> > switch point as the EndOfLog then why should be worried about the\n> > latter timeline switch which eventually everything after the EndOfLog\n> > going to be useless for us. We might continue switching TLI and/or\n> > writing the WAL right after EndOfLog, correct me if I am missing\n> > something here.\n> >\n> > Further, I still think replayEndTLI has set to the correct value what\n> > we looking for EndOfLogTLI when we go through the redo loop. When it\n> > read the record and finds a change in the current replayTLI then it\n> > updates that as:\n> >\n> > if (newReplayTLI != replayTLI)\n> > {\n> > /* Check that it's OK to switch to this TLI */\n> > checkTimeLineSwitch(EndRecPtr, newReplayTLI,\n> > prevReplayTLI, replayTLI);\n> >\n> > /* Following WAL records should be run with new TLI */\n> > replayTLI = newReplayTLI;\n> > switchedTLI = true;\n> > }\n> >\n> > Then replayEndTLI gets updated. If we going to skip the reread of\n> > \"LastRec\" that we were discussing, then I think the following code\n> > that fetches the EndOfLogTLI is also not needed, XLogCtl->replayEndTLI\n> > (or XLogCtl->lastReplayedTLI) or replayTLI (when InRecovery is false)\n> > should be enough, AFAICU.\n> >\n> > /*\n> > * EndOfLogTLI is the TLI in the filename of the XLOG segment containing\n> > * the end-of-log. 
It could be different from the timeline that EndOfLog\n> > * nominally belongs to, if there was a timeline switch in that segment,\n> > * and we were reading the old WAL from a segment belonging to a higher\n> > * timeline.\n> > */\n> > EndOfLogTLI = xlogreader->seg.ws_tli;\n> >\n>\n> I think I found the right case for this, above TLI fetch is needed in\n> the case where we do restore from the archived WAL files. In my trial,\n> the archive directory has files as below (Kindly ignore the extra\n> history file, I perform a few more trials to be sure):\n>\n> -rw-------. 1 amul amul 16777216 Nov 17 06:36 00000004000000000000001E\n> -rw-------. 1 amul amul 16777216 Nov 17 06:39 00000004000000000000001F.partial\n> -rw-------. 1 amul amul 128 Nov 17 06:36 00000004.history\n> -rw-------. 1 amul amul 16777216 Nov 17 06:40 00000005000000000000001F\n> -rw-------. 1 amul amul 171 Nov 17 06:39 00000005.history\n> -rw-------. 1 amul amul 209 Nov 17 06:45 00000006.history\n> -rw-------. 1 amul amul 247 Nov 17 06:52 00000007.history\n>\n> The timeline is switched in 1F file but the archiver has backup older\n> timeline file and renamed it. While performing PITR using these\n> archived files, the .partitial file seems to be skipped from the\n> restore. The file with the next timeline id is selected to read the\n> records that belong to the previous timeline id as well (i.e. 4 here,\n> all the records before timeline switch point). Here is the files\n> inside pg_wal directory after restore, note that in the current\n> experiment, I chose recovery_target_xid = <just before the timeline#5\n> switch point > and then recovery_target_action = 'promote':\n>\n> -rw-------. 1 amul amul 85 Nov 17 07:33 00000003.history\n> -rw-------. 1 amul amul 16777216 Nov 17 07:33 00000004000000000000001E\n> -rw-------. 1 amul amul 128 Nov 17 07:33 00000004.history\n> -rw-------. 1 amul amul 16777216 Nov 17 07:33 00000005000000000000001F\n> -rw-------. 
1 amul amul 171 Nov 17 07:33 00000005.history\n> -rw-------. 1 amul amul 209 Nov 17 07:33 00000006.history\n> -rw-------. 1 amul amul 247 Nov 17 07:33 00000007.history\n> -rw-------. 1 amul amul 16777216 Nov 17 07:33 00000008000000000000001F\n>\n> The last one is the new WAL file created in that cluster.\n>\n\nWith this experiment, I think it is clear that the EndOfLogTLI can be\ndifferent from the replayEndTLI or lastReplayedTLI, and we don't have\nany option to get that into other processes other than exporting it\ninto shared memory. Similarly, we have a bunch of options (e.g.\nreplayEndRecPtr, lastReplayedEndRecPtr, lastSegSwitchLSN, etc.) to get\nthe EndOfLog value, but none of them is a perfect and reliable option.\n\nTherefore, in the attached patch, I have exported EndOfLog and\nEndOfLogTLI into shared memory and attached only the refactoring\npatches, since there is a bunch of other work that needs to be done on\nthe main ASRO patches, which I discussed with Robert off-list, thanks.\n\nRegards,\nAmul",
"msg_date": "Tue, 23 Nov 2021 19:23:22 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Tue, Nov 23, 2021 at 7:23 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Wed, Nov 17, 2021 at 6:20 PM Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > On Wed, Nov 17, 2021 at 4:07 PM Amul Sul <sulamul@gmail.com> wrote:\n> > >\n> > > On Wed, Nov 17, 2021 at 11:13 AM Amul Sul <sulamul@gmail.com> wrote:\n> > > >\n> > > > On Sat, Nov 13, 2021 at 2:18 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > > >\n> > > > > On Mon, Nov 8, 2021 at 8:20 AM Amul Sul <sulamul@gmail.com> wrote:\n> > > > > > Attached is the rebased version of refactoring as well as the\n> > > > > > pg_prohibit_wal feature patches for the latest master head (commit #\n> > > > > > 39a3105678a).\n> > > > >\n> > > > > I spent a lot of time today studying 0002, and specifically the\n> > > > > question of whether EndOfLog must be the same as\n> > > > > XLogCtl->replayEndRecPtr and whether EndOfLogTLI must be the same as\n> > > > > XLogCtl->replayEndTLI.\n> > > > >\n> > > > > The answer to the former question is \"no\" because, if we don't enter\n> > > > > redo, XLogCtl->replayEndRecPtr won't be initialized at all. If we do\n> > > > > enter redo, then I think it has to be the same unless something very\n> > > > > weird happens. EndOfLog gets set like this:\n> > > > >\n> > > > > XLogBeginRead(xlogreader, LastRec);\n> > > > > record = ReadRecord(xlogreader, PANIC, false, replayTLI);\n> > > > > EndOfLog = EndRecPtr;\n> > > > >\n> > > > > In every case that exists in our regression tests, EndRecPtr is the\n> > > > > same before these three lines of code as it is afterward. However, if\n> > > > > you test with recovery_target=immediate, you can get it to be\n> > > > > different, because in that case we drop out of the redo loop after\n> > > > > calling recoveryStopsBefore() rather than after calling\n> > > > > recoveryStopsAfter(). 
Similarly I'm fairly sure that if you use\n> > > > > recovery_target_inclusive=off you can likewise get it to be different\n> > > > > (though I discovered the hard way that recovery_target_inclusive=off\n> > > > > is ignored when you use recovery_target_name). It seems like a really\n> > > > > bad thing that neither recovery_target=immediate nor\n> > > > > recovery_target_inclusive=off have any tests, and I think we ought to\n> > > > > add some.\n> > > > >\n> > > >\n> > > > recovery/t/003_recovery_targets.pl has test for\n> > > > recovery_target=immediate but not for recovery_target_inclusive=off, we\n> > > > can add that for recovery_target_lsn, recovery_target_time, and\n> > > > recovery_target_xid case only where it affects.\n> > > >\n> > > > > Anyway, in effect, these three lines of code have the effect of\n> > > > > backing up the xlogreader by one record when we stop before rather\n> > > > > than after a record that we're replaying. What that means is that\n> > > > > EndOfLog is going to be the end+1 of the last record that we actually\n> > > > > replayed. There might be one more record that we read but did not\n> > > > > replay, and that record won't impact the value we end up with in\n> > > > > EndOfLog. Now, XLogCtl->replayEndRecPtr is also that end+1 of the last\n> > > > > record that we actually replayed. To put that another way, there's no\n> > > > > way to exit the main redo loop after we set XLogCtl->replayEndRecPtr\n> > > > > and before we change LastRec. So in the cases where\n> > > > > XLogCtl->replayEndRecPtr gets initialized at all, it can only be\n> > > > > different from EndOfLog if something different happens when we re-read\n> > > > > the last-replayed WAL record than what happened when we read it the\n> > > > > first time. That seems unlikely, and would be unfortunate it if it did\n> > > > > happen. 
I am inclined to think that it might be better not to reread\n> > > > > the record at all, though.\n> > > >\n> > > > There are two reasons that the record is reread; first, one that you\n> > > > have just explained where the redo loop drops out due to\n> > > > recoveryStopsBefore() and another one is that InRecovery is false.\n> > > >\n> > > > In the formal case at the end, redo while-loop does read a new record\n> > > > which in effect updates EndRecPtr and when we breaks the loop, we do\n> > > > reach the place where we do reread record -- where we do read the\n> > > > record (i.e. LastRec) before the record that redo loop has read and\n> > > > which correctly sets EndRecPtr. In the latter case, definitely, we\n> > > > don't need any adjustment to EndRecPtr.\n> > > >\n> > > > So technically one case needs reread but that is also not needed, we\n> > > > have that value in XLogCtl->lastReplayedEndRecPtr. I do agree that we\n> > > > do not need to reread the record, but EndOfLog and EndOfLogTLI should\n> > > > be set conditionally something like:\n> > > >\n> > > > if (InRecovery)\n> > > > {\n> > > > EndOfLog = XLogCtl->lastReplayedEndRecPtr;\n> > > > EndOfLogTLI = XLogCtl->lastReplayedTLI;\n> > > > }\n> > > > else\n> > > > {\n> > > > EndOfLog = EndRecPtr;\n> > > > EndOfLogTLI = replayTLI;\n> > > > }\n> > > >\n> > > > > As far as this patch goes, I think we need\n> > > > > a solution that doesn't involve fetching EndOfLog from a variable\n> > > > > that's only sometimes initialized and then not doing anything with it\n> > > > > except in the cases where it was initialized.\n> > > > >\n> > > >\n> > > > Another reason could be EndOfLog changes further in the following case:\n> > > >\n> > > > /*\n> > > > * Actually, if WAL ended in an incomplete record, skip the parts that\n> > > > * made it through and start writing after the portion that persisted.\n> > > > * (It's critical to first write an OVERWRITE_CONTRECORD message, which\n> > > > * we'll do as soon as we're open 
for writing new WAL.)\n> > > > */\n> > > > if (!XLogRecPtrIsInvalid(missingContrecPtr))\n> > > > {\n> > > > Assert(!XLogRecPtrIsInvalid(abortedRecPtr));\n> > > > EndOfLog = missingContrecPtr;\n> > > > }\n> > > >\n> > > > Now only solution that I can think is to copy EndOfLog (so\n> > > > EndOfLogTLI) into shared memory.\n> > > >\n> > > > > As for EndOfLogTLI, I'm afraid I don't think that's the same thing as\n> > > > > XLogCtl->replayEndTLI. Now, it's hard to be sure, because I don't\n> > > > > think the regression tests contain any scenarios where we run recovery\n> > > > > and the values end up different. However, I think that the code sets\n> > > > > EndOfLogTLI to the TLI of the last WAL file that we read, and I think\n> > > > > XLogCtl->replayEndTLI gets set to the timeline from which that WAL\n> > > > > record originated. So imagine that we are looking for WAL that ought\n> > > > > to be in 000000010000000000000003 but we don't find it; instead we\n> > > > > find 000000020000000000000003 because our recovery target timeline is\n> > > > > 2, or something that has 2 in its history. We will read the WAL for\n> > > > > timeline 1 from this file which has timeline 2 in the file name. I\n> > > > > think if recovery ends in this file before the timeline switch, these\n> > > > > values will be different. I did not try to construct a test case for\n> > > > > this today due to not having enough time, so it's possible that I'm\n> > > > > wrong about this, but that's how it looks to me from the code.\n> > > > >\n> > > >\n> > > > I am not sure, I have understood this scenario due to lack of\n> > > > expertise in this area -- Why would the record we looking that ought\n> > > > to be in 000000010000000000000003 we don't find it? 
Possibly WAL\n> > > > corruption or that file is missing?\n> > > >\n> > >\n> > > On further study, XLogPageRead(), WaitForWALToBecomeAvailable(), and\n> > > XLogFileReadAnyTLI(), I think I could make a sense that there could be\n> > > a case where the record belong to TLI 1 we are looking for; we might\n> > > open the file with TLI 2. But, I am wondering what's wrong if we say\n> > > that TLI 1 for that record even if we read it from the file has TLI 2 or 3 or 4\n> > > in its file name -- that statement is still true, and that record\n> > > should be still accessible from the filename with TLI 1. Also, if we\n> > > going to consider this reading record exists before the timeline\n> > > switch point as the EndOfLog then why should be worried about the\n> > > latter timeline switch which eventually everything after the EndOfLog\n> > > going to be useless for us. We might continue switching TLI and/or\n> > > writing the WAL right after EndOfLog, correct me if I am missing\n> > > something here.\n> > >\n> > > Further, I still think replayEndTLI has set to the correct value what\n> > > we looking for EndOfLogTLI when we go through the redo loop. When it\n> > > read the record and finds a change in the current replayTLI then it\n> > > updates that as:\n> > >\n> > > if (newReplayTLI != replayTLI)\n> > > {\n> > > /* Check that it's OK to switch to this TLI */\n> > > checkTimeLineSwitch(EndRecPtr, newReplayTLI,\n> > > prevReplayTLI, replayTLI);\n> > >\n> > > /* Following WAL records should be run with new TLI */\n> > > replayTLI = newReplayTLI;\n> > > switchedTLI = true;\n> > > }\n> > >\n> > > Then replayEndTLI gets updated. 
If we going to skip the reread of\n> > > \"LastRec\" that we were discussing, then I think the following code\n> > > that fetches the EndOfLogTLI is also not needed, XLogCtl->replayEndTLI\n> > > (or XLogCtl->lastReplayedTLI) or replayTLI (when InRecovery is false)\n> > > should be enough, AFAICU.\n> > >\n> > > /*\n> > > * EndOfLogTLI is the TLI in the filename of the XLOG segment containing\n> > > * the end-of-log. It could be different from the timeline that EndOfLog\n> > > * nominally belongs to, if there was a timeline switch in that segment,\n> > > * and we were reading the old WAL from a segment belonging to a higher\n> > > * timeline.\n> > > */\n> > > EndOfLogTLI = xlogreader->seg.ws_tli;\n> > >\n> >\n> > I think I found the right case for this, above TLI fetch is needed in\n> > the case where we do restore from the archived WAL files. In my trial,\n> > the archive directory has files as below (Kindly ignore the extra\n> > history file, I perform a few more trials to be sure):\n> >\n> > -rw-------. 1 amul amul 16777216 Nov 17 06:36 00000004000000000000001E\n> > -rw-------. 1 amul amul 16777216 Nov 17 06:39 00000004000000000000001F.partial\n> > -rw-------. 1 amul amul 128 Nov 17 06:36 00000004.history\n> > -rw-------. 1 amul amul 16777216 Nov 17 06:40 00000005000000000000001F\n> > -rw-------. 1 amul amul 171 Nov 17 06:39 00000005.history\n> > -rw-------. 1 amul amul 209 Nov 17 06:45 00000006.history\n> > -rw-------. 1 amul amul 247 Nov 17 06:52 00000007.history\n> >\n> > The timeline is switched in 1F file but the archiver has backup older\n> > timeline file and renamed it. While performing PITR using these\n> > archived files, the .partitial file seems to be skipped from the\n> > restore. The file with the next timeline id is selected to read the\n> > records that belong to the previous timeline id as well (i.e. 4 here,\n> > all the records before timeline switch point). 
Here is the files\n> > inside pg_wal directory after restore, note that in the current\n> > experiment, I chose recovery_target_xid = <just before the timeline#5\n> > switch point > and then recovery_target_action = 'promote':\n> >\n> > -rw-------. 1 amul amul 85 Nov 17 07:33 00000003.history\n> > -rw-------. 1 amul amul 16777216 Nov 17 07:33 00000004000000000000001E\n> > -rw-------. 1 amul amul 128 Nov 17 07:33 00000004.history\n> > -rw-------. 1 amul amul 16777216 Nov 17 07:33 00000005000000000000001F\n> > -rw-------. 1 amul amul 171 Nov 17 07:33 00000005.history\n> > -rw-------. 1 amul amul 209 Nov 17 07:33 00000006.history\n> > -rw-------. 1 amul amul 247 Nov 17 07:33 00000007.history\n> > -rw-------. 1 amul amul 16777216 Nov 17 07:33 00000008000000000000001F\n> >\n> > The last one is the new WAL file created in that cluster.\n> >\n>\n> With this experiment, I think it is clear that the EndOfLogTLI can be\n> different from the replayEndTLI or lastReplayedTLI, and we don't have\n> any other option to get that into other processes other than exporting\n> into shared memory. Similarly, we have bunch of option (e.g.\n> replayEndRecPtr, lastReplayedEndRecPtr, lastSegSwitchLSN etc) to get\n> EndOfLog value but those are not perfect and reliable options.\n>\n> Therefore, in the attached patch, I have exported EndOfLog and\n> EndOfLogTLI into shared memory and attached only the refactoring\n> patches since there a bunch of other work needs to be done on the main\n> ASRO patches what I discussed with Robert off-list, thanks.\n>\n\nAttaching the rest of the patches. 
To execute XLogAcceptWrites() ->\nPerformRecoveryXLogAction() in the Checkpointer process, ideally we\nshould perform a full checkpoint, but we can't do that using the current\nPerformRecoveryXLogAction(), which would call RequestCheckpoint() with\nWAIT flags and make the Checkpointer process wait infinitely on itself\nto finish the requested checkpoint, which is bad!\n\nThe option we have is to change RequestCheckpoint() so that the\nCheckpointer process directly calls CreateCheckPoint(), as we do for\nthe !IsPostmasterEnvironment case, but the problem is that XLogWrite() running\ninside the Checkpointer process can reach CreateCheckPoint() and cause\nthe unexpected behaviour that I have noted previously[1]. Whether the\nRequestCheckpoint() from XLogWrite() when inside the Checkpointer process\nis needed or not needs a separate discussion. For now, I have\nchanged PerformRecoveryXLogAction() to call CreateCheckPoint() for the\nCheckpointer process; in the v41-0003 version I tried to make the\nchanges to RequestCheckpoint() to avoid that, but that change looks too\nugly.\n\nAnother problem is the recursive call to XLogAcceptWrites() in the\nCheckpointer process due to the aforesaid CreateCheckPoint() call from\nPerformRecoveryXLogAction(). The reason is that, to avoid delay in\nprocessing WAL prohibit state change requests, we have added\nProcessWALProhibitStateChangeRequest() calls in multiple places that the\nCheckpointer can check and process while performing a long-running\ncheckpoint. When the Checkpointer calls CreateCheckPoint() from\nPerformRecoveryXLogAction(), that can also hit\nProcessWALProhibitStateChangeRequest(), and since the XLogAcceptWrites()\noperation has not completed yet, it tries to do that again. To avoid that\nI have added a flag that skips ProcessWALProhibitStateChangeRequest()\nexecution if that flag is set; see\nProcessWALProhibitStateChangeRequest() in the attached 0003 patch.\n\nNote that both issues I noted above boil down to\nCreateCheckPoint() and its need. 
If we don't need to perform a full\ncheckpoint in our case then we might not have that recursion issue.\nInstead, we could do CreateEndOfRecoveryRecord() and then the full\ncheckpoint that PerformRecoveryXLogAction() currently does for the\npromotion case, but not having a full checkpoint might look scary.\nI tried that and it works fine for me, but I am not very confident about\nit.\n\nRegards,\nAmul\n\n1] https://postgr.es/m/CAAJ_b97fPWU_yyOg97Y5AtSvx5mrg2cGyz260swz5x5iPKEM+g@mail.gmail.com",
"msg_date": "Fri, 26 Nov 2021 18:29:25 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "Attaching the later version, which has a few additional changes that decide\nwhether the Checkpointer process should halt or not in the WAL-prohibited\nstate; those changes are yet to be confirmed and tested thoroughly, thanks.\n\nRegards,\nAmul",
"msg_date": "Wed, 1 Dec 2021 10:29:15 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "Attached is a rebased version for the latest master head (#891624f0ec).\n\nThe 0001 and 0002 patches are changed a bit due to the xlog.c refactoring\ncommit (#70e81861), and need a bit more thought on copying global variables into\nthe right shared memory structure. Also, I made some changes to the 0003\npatch to avoid XLogAcceptWrites() re-entrance, as suggested in an offline\ndiscussion.\n\nRegards,\nAmul",
"msg_date": "Fri, 8 Apr 2022 19:57:03 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Mon, Mar 15, 2021 at 12:56 PM Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > It is a very minor change, so I rebased the patch. Please take a look, if that works for you.\n> >\n>\n> Thanks, I am getting one more failure for the vacuumlazy.c. on the\n> latest master head(d75288fb27b), I fixed that in attached version.\n\nThanks Amul! I haven't looked at the whole thread, I may be repeating\nthings here, please bear with me.\n\n1) Is the pg_prohibit_wal() only user sets the wal prohibit mode? Or\ndo we still allow via 'ALTER SYSTEM READ ONLY/READ WRITE'? If not, I\nthink the patches still have ALTER SYSTEM READ ONLY references.\n2) IIUC, the idea of this patch is not to generate any new WAL when\nset as default_transaction_read_only and transaction_read_only can't\nguarantee that?\n3) IMO, the function name pg_prohibit_wal doesn't look good where it\nalso allows one to set WAL writes, how about the following functions -\npg_prohibit_wal or pg_disallow_wal_{generation, inserts} or\npg_allow_wal or pg_allow_wal_{generation, inserts} without any\narguments and if needed a common function\npg_set_wal_generation_state(read-only/read-write) something like that?\n4) It looks like only the checkpointer is setting the WAL prohibit\nstate? Is there a strong reason for that? Why can't the backend take a\nlock on prohibit state in shared memory and set it and let the\ncheckpointer read it and block itself from writing WAL?\n5) Is SIGUSR1 (which is multiplexed) being sent without a \"reason\" to\ncheckpointer? Why?\n6) What happens for long-running or in-progress transactions if\nsomeone prohibits WAL in the midst of them? Do these txns fail? Or do\nwe say that we will allow them to run to completion? Or do we fail\nthose txns at commit time? 
One might use this feature to say not let\nserver go out of disk space, but if we allow in-progress txns to\ngenerate/write WAL, then how can one achieve that with this feature?\nSay, I monitor my server in such a way that at 90% of disk space,\nprohibit WAL to avoid server crash. But if this feature allows\nin-progress txns to generate WAL, then the server may still crash?\n7) What are the other use-cases (I can think of - to avoid out of disk\ncrashes, block/freeze writes to database when the server is\ncompromised) with this feature? Any usages during/before failover,\npromotion or after it?\n8) Is there a strong reason that we've picked up conditional variable\nwal_prohibit_cv over mutex/lock for updating WALProhibit shared\nmemory?\n9) Any tests that you are planning to add?\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Sat, 23 Apr 2022 13:33:56 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Sat, Apr 23, 2022 at 1:34 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Mar 15, 2021 at 12:56 PM Amul Sul <sulamul@gmail.com> wrote:\n> > >\n> > > It is a very minor change, so I rebased the patch. Please take a look, if that works for you.\n> > >\n> >\n> > Thanks, I am getting one more failure for the vacuumlazy.c. on the\n> > latest master head(d75288fb27b), I fixed that in attached version.\n>\n> Thanks Amul! I haven't looked at the whole thread, I may be repeating\n> things here, please bear with me.\n>\n\nNp, thanks for looking into it.\n\n> 1) Is the pg_prohibit_wal() only user sets the wal prohibit mode? Or\n> do we still allow via 'ALTER SYSTEM READ ONLY/READ WRITE'? If not, I\n> think the patches still have ALTER SYSTEM READ ONLY references.\n\nCould you please point me to what those references are? I didn't find\nany in the v45 version.\n\n> 2) IIUC, the idea of this patch is not to generate any new WAL when\n> set as default_transaction_read_only and transaction_read_only can't\n> guarantee that?\n\nNo. Complete WAL write should be disabled, in other words XLogInsert()\nshould be restricted.\n\n> 3) IMO, the function name pg_prohibit_wal doesn't look good where it\n> also allows one to set WAL writes, how about the following functions -\n> pg_prohibit_wal or pg_disallow_wal_{generation, inserts} or\n> pg_allow_wal or pg_allow_wal_{generation, inserts} without any\n> arguments and if needed a common function\n> pg_set_wal_generation_state(read-only/read-write) something like that?\n\nThere are already similar suggestions before too, but none of that\nfinalized yet, there are other more challenges that need to be\nhandled, so we can keep this work at last.\n\n> 4) It looks like only the checkpointer is setting the WAL prohibit\n> state? Is there a strong reason for that? 
Why can't the backend take a\n> lock on prohibit state in shared memory and set it and let the\n> checkpointer read it and block itself from writing WAL?\n\nOnce the WAL prohibited state transition is initiated, it must be\ncompleted; there is no fallback. What if the backend exits before the\ntransition completes? Similarly, even if the checkpointer exits,\nit will be restarted again and will complete the state transition.\n\n> 5) Is SIGUSR1 (which is multiplexed) being sent without a \"reason\" to\n> checkpointer? Why?\n\nWe simply want to wake up the checkpointer process without asking for\nspecific work in the handler function. Another suitable choice would be\nSIGINT; we can choose that too if needed.\n\n> 6) What happens for long-running or in-progress transactions if\n> someone prohibits WAL in the midst of them? Do these txns fail? Or do\n> we say that we will allow them to run to completion? Or do we fail\n> those txns at commit time? One might use this feature to say not let\n> server go out of disk space, but if we allow in-progress txns to\n> generate/write WAL, then how can one achieve that with this feature?\n> Say, I monitor my server in such a way that at 90% of disk space,\n> prohibit WAL to avoid server crash. But if this feature allows\n> in-progress txns to generate WAL, then the server may still crash?\n\nRead-only transactions will be allowed to continue, and if such a\ntransaction tries to write, or any other transaction has already performed\nwrites, then the session running that transaction will be\nterminated -- the design is described in the first mail of this\nthread.\n\n> 7) What are the other use-cases (I can think of - to avoid out of disk\n> crashes, block/freeze writes to database when the server is\n> compromised) with this feature? 
Any usages during/before failover,\n> promotion or after it?\n\nThe important use case is for failover, to avoid split-brain situations.\n\n> 8) Is there a strong reason that we've picked up conditional variable\n> wal_prohibit_cv over mutex/lock for updating WALProhibit shared\n> memory?\n\nI am not sure how that can be done using a mutex or lock.\n\n> 9) Any tests that you are planning to add?\n\nYes. I have added very sophisticated tests that cover most of\nmy code changes, but that is not enough for such critical code\nchanges; there is a lot of room for improvement and for adding more tests\nfor this module as well as other parts, e.g. some missing coverage of\ngin, gist, brin, and core features where this patch is adding checks, etc.\nAny help will be greatly appreciated.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Tue, 26 Apr 2022 18:13:18 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "On Fri, Apr 8, 2022 at 7:27 AM Amul Sul <sulamul@gmail.com> wrote:\n> Attached is rebase version for the latest maste head(#891624f0ec).\n\nHi Amul,\n\nI'm going through past CF triage emails today; I noticed that this\npatch dropped out of the commitfest when you withdrew it in January,\nbut it hasn't been added back with the most recent patchset you\nposted. Was that intended, or did you want to re-register it for\nreview?\n\n--Jacob\n\n\n",
"msg_date": "Wed, 27 Jul 2022 15:35:44 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
},
{
"msg_contents": "Hi,\n\nOn Thu, Jul 28, 2022 at 4:05 AM Jacob Champion <jchampion@timescale.com> wrote:\n>\n> On Fri, Apr 8, 2022 at 7:27 AM Amul Sul <sulamul@gmail.com> wrote:\n> > Attached is rebase version for the latest maste head(#891624f0ec).\n>\n> Hi Amul,\n>\n> I'm going through past CF triage emails today; I noticed that this\n> patch dropped out of the commitfest when you withdrew it in January,\n> but it hasn't been added back with the most recent patchset you\n> posted. Was that intended, or did you want to re-register it for\n> review?\n>\n\nYes, there is a plan to re-register it again but not anytime soon,\nonce we start to rework the design.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Thu, 28 Jul 2022 09:15:05 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] ALTER SYSTEM READ ONLY"
}
] |
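The timeline confusion discussed in the thread above (EndOfLogTLI taken from `xlogreader->seg.ws_tli`, i.e. the TLI in the segment file name, versus the timeline a record nominally belongs to) comes down to how WAL segment file names encode the timeline ID in their first 8 hex digits. The sketch below is not PostgreSQL code; it just mirrors the naming rule that PostgreSQL implements with the XLogFileName/XLogFromFileName macros in xlog_internal.h, assuming the default 16MB wal_segment_size, and checks itself against the file names from Amul's archive listing.

```python
# Illustrative helpers; the real logic lives in PostgreSQL's xlog_internal.h.
WAL_SEG_SIZE = 16 * 1024 * 1024                  # default 16MB segments
SEGS_PER_XLOGID = 0x100000000 // WAL_SEG_SIZE    # 256 segments per 4GB "xlog id"

def wal_file_name(tli, segno):
    """Build the 24-hex-digit segment file name: 8 digits of timeline ID,
    then the segment number split into two 8-digit halves."""
    return "%08X%08X%08X" % (tli, segno // SEGS_PER_XLOGID, segno % SEGS_PER_XLOGID)

def wal_file_parse(name):
    """Return (tli, segno) recovered from a segment file name."""
    tli = int(name[0:8], 16)
    segno = int(name[8:16], 16) * SEGS_PER_XLOGID + int(name[16:24], 16)
    return tli, segno
```

Because only the first 8 digits differ, the same segment number (the same byte range of WAL) can exist under several timeline IDs after a switch, which is why records of timeline 4 may be read from a file named with timeline 5, as in the restore experiment above.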
[
{
"msg_contents": "Hi,\n\nI noticed that there are several file layout assumptions in dbsize.c which might not hold true for non-heap relations attached via the TableAm API. It seems logical that, in order to retrieve the disk size of a relation, the existing size callback should be used instead.\n\nA small patch is included to demonstrate what such an implementation could look like. Also, the existing method for heap, table_block_relation_size, should be able to address the valid cases where a fork number does not exist.\n\nIf this is considered valid, then the same can be applied to indexes too. The more generic calculate_relation_size can be adapted to call into the TableAm for those kinds of relations where that makes sense. If agreed, a more complete patch can be provided.\n\nCheers,\n//Georgios",
"msg_date": "Tue, 16 Jun 2020 14:51:31 +0000",
"msg_from": "Georgios <gkokolatos@protonmail.com>",
"msg_from_op": true,
"msg_subject": "Use TableAm API in pg_table_size"
}
] |
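For context on the file layout assumptions Georgios mentions: a heap relation's data is stored as a series of segment files (1GB each by default) named after the relfilenode, with `.1`, `.2`, ... suffixes, and the free-space-map and visibility-map forks add `_fsm`/`_vm` suffixes. The loop that dbsize.c's calculate_relation_size performs per fork can be modelled roughly as below; this is an illustrative Python sketch, not the C code, and the path handling is simplified.

```python
import os

def calculate_fork_size(path, fork="main"):
    """Sum the sizes of all segment files for one relation fork:
    <path>, <path>.1, <path>.2, ...; non-main forks get a _<fork> suffix."""
    base = path if fork == "main" else path + "_" + fork
    total, segno = 0, 0
    while True:
        seg = base if segno == 0 else "%s.%d" % (base, segno)
        if not os.path.exists(seg):
            break
        total += os.path.getsize(seg)
        segno += 1
    return total

# Tiny self-check against files created in a temporary directory.
import tempfile
_d = tempfile.mkdtemp()
for _name, _size in (("16384", 100), ("16384.1", 40), ("16384_fsm", 10)):
    with open(os.path.join(_d, _name), "wb") as _f:
        _f.write(b"\0" * _size)
demo_main = calculate_fork_size(os.path.join(_d, "16384"))
demo_fsm = calculate_fork_size(os.path.join(_d, "16384"), "fsm")
```

A table access method that does not lay files out this way is exactly the case where this stat-the-files approach breaks, which is the motivation for routing the size calculation through the TableAm callback instead.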
[
{
"msg_contents": "Hi,\n\nWe're having an issue with planner performance when doing large deletes at the same time as we have long running transactions, from what we gathered, because of the scan to find the actual minimum and maximum values of the table.\n\nInstead of trying to explain what happens, here is a very simple example:\n\n(Our load is very similar to this scenario)\n\nThe \"test\" table has one column with an index on it.\n\nSession 1:\n=# insert into test select generate_series(1,10000000);\n\nSession 2: do the long running transaction:\n=# begin;\n=# do_whatever_to_get_a_long_running_transaction\n\nSession 1:\n=# delete from test where a>1000000;\n=# analyze test;\n=# explain select * from test where a > 11000000;\n QUERY PLAN \n----------------------------------------------------------------------\n Index Only Scan using idxa on test (cost=0.42..4.44 rows=1 width=4)\n Index Cond: (a > 11000000)\n(2 rows)\n\nTime: 2606,068 ms (00:02,606)\n\nOf course, what happens here is that the histogram says that max(a) is 1000000, and get_actual_variable_range verifies the real upper bound. And has to read quite a few dead index records.\n\nIs there something to do to avoid the problem (except for the long running transaction, which unfortunately is out of our control) ?\n\nRegards",
"msg_date": "Tue, 16 Jun 2020 17:11:55 +0200",
"msg_from": "Marc Cousin <cousinmarc@gmail.com>",
"msg_from_op": true,
"msg_subject": "slow get_actual_variable_range with long running transactions"
},
{
"msg_contents": "Marc Cousin <cousinmarc@gmail.com> writes:\n> Of course, what happens here is that the histogram says that max(a) is 1000000, and get_actual_variable_range verifies the real upper bound. And has to read quite a few dead index records.\n\nWe've revised that logic several times to reduce the scope of the\nproblem. Since you didn't say which PG version you're using exactly,\nit's hard to offer any concrete suggestions. But I fear there's\nnot a lot you can do about it, other than upgrading if you're on a\npre-v11 release.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Jun 2020 11:51:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: slow get_actual_variable_range with long running transactions"
},
{
"msg_contents": "Oh, sorry about that, I forgot to detail this. I tested on both 10.13 (which is the production environment on which we faced this), and on 12.3, with the same problem.\n\nOn 16/06/2020 17:51, Tom Lane wrote:\n> Marc Cousin <cousinmarc@gmail.com> writes:\n>> Of course, what happens here is that the histogram says that max(a) is 1000000, and get_actual_variable_range verifies the real upper bound. And has to read quite a few dead index records.\n> \n> We've revised that logic several times to reduce the scope of the\n> problem. Since you didn't say which PG version you're using exactly,\n> it's hard to offer any concrete suggestions. But I fear there's\n> not a lot you can do about it, other than upgrading if you're on a\n> pre-v11 release.\n> \n> \t\t\tregards, tom lane\n>",
"msg_date": "Tue, 16 Jun 2020 18:28:01 +0200",
"msg_from": "Marc Cousin <cousinmarc@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: slow get_actual_variable_range with long running transactions"
},
{
"msg_contents": "On 16/06/2020 18:28, Marc Cousin wrote:\n> Oh, sorry about that, I forgot to detail this. I tested on both 10.13 (which is the production environment on which we faced this), and on 12.3, with the same problem.\n> \n> On 16/06/2020 17:51, Tom Lane wrote:\n>> Marc Cousin <cousinmarc@gmail.com> writes:\n>>> Of course, what happens here is that the histogram says that max(a) is 1000000, and get_actual_variable_range verifies the real upper bound. And has to read quite a few dead index records.\n>>\n>> We've revised that logic several times to reduce the scope of the\n>> problem. Since you didn't say which PG version you're using exactly,\n>> it's hard to offer any concrete suggestions. But I fear there's\n>> not a lot you can do about it, other than upgrading if you're on a\n>> pre-v11 release.\n>>\n>> \t\t\tregards, tom lane\n>>\n> \nSince you told me this, I did some more tests on PG 12.\n\nThe first attempt is just as slow (maybe some hint bits being set or something like that), but the second is much faster. So nothing to see here, sorry for the noise.\n\nAnd sorry for the previous top posting :)\n\nRegards",
"msg_date": "Tue, 16 Jun 2020 18:37:17 +0200",
"msg_from": "Marc Cousin <cousinmarc@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: slow get_actual_variable_range with long running transactions"
}
] |
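A toy model of the behaviour Marc hit: get_actual_variable_range() walks the index from its endpoint looking for the first entry it can accept. Under an MVCC-style visibility check it has to step over every dead entry left behind by the big DELETE, while the v11-era change to use a non-vacuumable snapshot lets the scan stop at the first dead-but-not-yet-vacuumable entry. The following is a deliberately simplified simulation, not the planner code; only the step-count contrast is the point.

```python
def actual_variable_max(index_keys_desc, accept):
    """Walk a sorted index from its high end; return (max_key, tuples_visited)
    where max_key is the first entry the 'accept' predicate keeps."""
    visited = 0
    for key, state in index_keys_desc:
        visited += 1
        if accept(state):
            return key, visited
    return None, visited

# Toy index in descending order: keys 1..100 live, 101..1000 deleted
# but not yet vacuumed (like rows removed under a long-running transaction).
toy = [(k, "dead" if k > 100 else "live") for k in range(1000, 0, -1)]

# MVCC-style check: must skip every dead entry to find the live maximum.
mvcc_max, mvcc_cost = actual_variable_max(toy, lambda s: s == "live")

# Non-vacuumable-style check: a recently dead entry is good enough as a bound.
nonvac_max, nonvac_cost = actual_variable_max(toy, lambda s: s in ("live", "dead"))
```

In this miniature, the MVCC-style scan visits 901 entries to report 100, while the non-vacuumable-style scan reports 1000 after a single step, which mirrors why the planning time dropped after the first (hint-bit-setting) attempt and across versions.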
[
{
"msg_contents": "Hi,\n\nSticking with precedent for the timing of a Beta 2, the RMT[1] has set\nthe PostgreSQL 13 Beta 2 release date to be June 25, 2020. As such, if\nyou have open items[2] that you can finish by the end of this weekend\n(June 21 AOE), please do so :)\n\nThanks,\n\nJonathan\n\n[1] https://wiki.postgresql.org/wiki/Release_Management_Team\n[2] https://wiki.postgresql.org/wiki/PostgreSQL_13_Open_Items",
"msg_date": "Tue, 16 Jun 2020 13:49:59 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 13 Beta 2 Release Date"
}
] |
[
{
"msg_contents": "Hi Hackers,\n\nWhen I was working on an extension on the Windows platform, I used the\ncommand 'vcregress contribcheck' to run the regression test for my\nmodule. However, this command runs the regression tests for all the\nmodules, and I couldn't find a way to regression-test my module only. I think\nit would be better to have such an option to make development\nmore efficient (maybe there is a solution already, but I can't find it).\n\nThe attached patch allows the user to run the regression tests by using either\n'vcregress contribcheck' for all modules, or 'vcregress contribcheck postgres_fdw'\nif you want to test a single extension directly, for example 'postgres_fdw'.\n\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. (Canada)\nwww.highgo.ca",
"msg_date": "Tue, 16 Jun 2020 17:54:17 -0700",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": true,
"msg_subject": "Add an option to allow run regression test for individual module on\n Windows build"
}
] |
[
{
"msg_contents": "I think someone planned to have XactLogCommitRecord() use its forceSync\nparameter instead of reading the forceSyncCommit global variable, but that\ndidn't happen. I'd like to remove the parameter, as attached. This has no\nfunctional consequences, as detailed in the commit message.",
"msg_date": "Tue, 16 Jun 2020 20:26:15 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Remove dead forceSync parameter of XactLogCommitRecord()"
},
{
"msg_contents": "On Tue, Jun 16, 2020 at 08:26:15PM -0700, Noah Misch wrote:\n> I think someone planned to have XactLogCommitRecord() use its forceSync\n> parameter instead of reading the forceSyncCommit global variable, but that\n> didn't happen. I'd like to remove the parameter, as attached. This has no\n> functional consequences, as detailed in the commit message.\n\n+1. Looks like an oversight of 4f1b890b to me.\n--\nMichael",
"msg_date": "Wed, 17 Jun 2020 16:51:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Remove dead forceSync parameter of XactLogCommitRecord()"
}
] |
[
{
"msg_contents": "Hi,\n\nAs you may know better than I do, backend processes sometimes use a lot\nof memory for various reasons, like caches, prepared\nstatements and cursors.\nWhen supporting PostgreSQL, I face situations where I need to investigate the\ncause of memory bloat.\n\nAFAIK, the way to examine it is to attach a debugger and call\nMemoryContextStats(TopMemoryContext); however, I see some difficulties in\ndoing so:\n\n - some production environments don't allow us to run a debugger easily\n - many lines about memory contexts are hard to analyze\n\nUsing an extension (pg_stat_get_memory_context() in pg_cheat_funcs[1]),\nwe can get a view of the memory contexts, but only the memory contexts\nof the backend which executed pg_stat_get_memory_context().\n\n\n[user interface]\nIf we had a function exposing the memory contexts for a specified PID,\nwe could easily examine them.\nI imagine a user interface something like this:\n\n =# SELECT * FROM pg_stat_get_backend_memory_context(PID);\n\n name                  | parent           | level | total_bytes | total_nblocks | free_bytes | free_chunks | used_bytes | some other attributes..\n-----------------------+------------------+-------+-------------+---------------+------------+-------------+------------\n TopMemoryContext      |                  |     0 |       68720 |             5 |       9936 |          16 |      58784\n TopTransactionContext | TopMemoryContext |     1 |        8192 |             1 |       7720 |           0 |        472\n PL/pgSQL function     | TopMemoryContext |     1 |       16384 |             2 |       5912 |           1 |      10472\n PL/pgSQL function     | TopMemoryContext |     1 |       32768 |             3 |      15824 |           3 |      16944\n dynahash              | TopMemoryContext |     1 |        8192 |             1 |        512 |           0 |       7680\n...\n\n\n[rough implementation ideas and challenges]\nI suppose communication between the process which runs\npg_stat_get_backend_memory_context() (referred to as A) and the\ntarget backend (referred to as B) goes like this:\n\n 1. A sends a message to B and orders it to dump its memory contexts\n 2. B dumps its memory contexts to some shared area\n 3. 
A reads the shared area and returns it to the function invoker\n\nTo do so, there seem to be some challenges.\n\n(1) how to share memory context information between backend processes\nThe amount of memory context data varies greatly depending on the\nsituation, so it's not appropriate to share the memory contexts using\nfixed shared memory.\nAlso, using a file in 'stats_temp_directory' seems difficult considering\nthe background of the shared-memory based stats collector\nproposal[2].\nInstead, I'm thinking about using dsm_mq, which allows messages of\narbitrary length to be sent and received.\n\n(2) how to send messages requesting memory contexts\nCommunicating via a signal seems simple, but assigning a dedicated signal\nnumber for this purpose seems wasteful.\nI'm thinking about using fixed shared memory to hold a dsm_mq handle.\nTo send a message, A creates a dsm_mq and puts the handle in the shared\nmemory area. When B finds a handle, B dumps its memory contexts to the\ncorresponding dsm_mq.\n\nHowever, enabling B to find the handle requires checking the shared memory\nperiodically. 
I'm not sure about the suitable location and timing for this\nchecking yet, and I doubt this way of communication is acceptable because\nit puts a certain additional load on all the backends.\n\n(3) clarifying the necessary attributes\nFrom reading the past discussion[3], it's not so clear what kind\nof information should be exposed regarding memory contexts.\n\n\nAs a first step, to deal with (3) I'd like to add\npg_stat_get_backend_memory_context(), whose target is limited to the\nlocal backend process.\n\n\nThanks for reading; what do you think?\n\n\n[1] \nhttps://github.com/MasaoFujii/pg_cheat_funcs#setof-record-pg_stat_get_memory_context\n[2] \nhttps://www.postgresql.org/message-id/flat/20180629.173418.190173462.horiguchi.kyotaro@lab.ntt.co.jp\n[3] \nhttps://www.postgresql.org/message-id/20190805171608.g22gxwmfr2r7uf6t%40alap3.anarazel.de\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 17 Jun 2020 22:00:21 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Creating a function for exposing memory usage of backend process"
},
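Steps 1-3 of the protocol proposed above, together with challenge (2) (publishing a dsm_mq handle through a fixed shared-memory slot that the target backend polls), can be sketched in miniature. The following Python model uses a thread and in-process queues in place of backends and dynamic shared memory; all names are illustrative stand-ins, not the proposed PostgreSQL API.

```python
import queue
import threading

# Stand-in for the fixed shared-memory slot that would hold a dsm_mq handle.
request_slot = queue.Queue(maxsize=1)

def backend(contexts, stop):
    """Target backend B: periodically check the shared slot; when a handle
    (here, a response queue) appears, dump memory context stats into it."""
    while not stop.is_set():
        try:
            mq = request_slot.get(timeout=0.01)
        except queue.Empty:
            continue                      # the periodic polling the mail worries about
        for row in contexts:
            mq.put(row)
        mq.put(None)                      # end-of-dump marker

def get_backend_memory_contexts():
    """Requesting process A: create a queue (the dsm_mq stand-in), publish
    its handle in the shared slot, then read the dump back."""
    mq = queue.Queue()
    request_slot.put(mq)
    rows = []
    while (row := mq.get(timeout=5)) is not None:
        rows.append(row)
    return rows

# Demo run: B serves fake (name, level, total_bytes) stats; A requests a dump.
_stats = [("TopMemoryContext", 0, 68720), ("CacheMemoryContext", 1, 8192)]
_stop = threading.Event()
_b = threading.Thread(target=backend, args=(_stats, _stop), daemon=True)
_b.start()
demo_rows = get_backend_memory_contexts()
_stop.set()
_b.join(timeout=1)
```

The arbitrary-length-message property that makes dsm_mq attractive is mirrored here by the unbounded response queue, and the polling loop makes concrete the extra per-backend load the mail is concerned about.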
{
"msg_contents": "\n\nOn 2020/06/17 22:00, torikoshia wrote:\n> Hi,\n> \n> As you may know better than I do, backend processes sometimes use a lot\n> of memory because of the various reasons like caches, prepared\n> statements and cursors.\n> When supporting PostgreSQL, I face situations for investigating the\n> reason of memory bloat.\n> \n> AFAIK, the way to examine it is attaching a debugger and call\n> MemoryContextStats(TopMemoryContext), however, I feel some difficulties\n> doing it:\n> \n> - some production environments don't allow us to run a debugger easily\n> - many lines about memory contexts are hard to analyze\n\nAgreed. The feature to view how local memory contexts are used in\neach process is very useful!\n\n\n> Using an extension(pg_stat_get_memory_context() in pg_cheat_funcs[1]),\n> we can get the view of the memory contexts, but it's the memory contexts\n> of the backend which executed the pg_stat_get_memory_context().\n> \n> \n> [user interface]\n> If we have a function exposing memory contexts for specified PID,\n> we can easily examine them.\n> I imagine a user interface something like this:\n> \n> =# SELECT * FROM pg_stat_get_backend_memory_context(PID);\n\nI'm afraid that this interface is not convenient when we want to monitor\nthe usages of local memory contexts for all the processes. For example,\nI'd like to monitor how much memory is totally used to store prepared\nstatements information. For that purpose, I wonder if it's more convenient\nto provide the view displaying the memory context usages for\nall the processes.\n\nTo provide that view, all the processes need to save their local memory\ncontext usages into the shared memory or the special files in their\nconvenient timing. For example, backends do that every end of query\nexecution (during waiting for new request from clients). OTOH,\nthe query on the view scans and displays all those information.\n\nOf course there would be several issues in this idea. 
One issue is\nthe performance overhead caused when each process stores\nits own memory context usage somewhere. Even if backends do that\nwhile waiting for a new request from clients, non-negligible overhead\nmight happen. A performance test is necessary. Also, this means that\nwe cannot see the memory context usage of a process in the middle of\nquery execution, since it's saved at the end of the query. If local memory bloat\noccurs only during query execution and we want to investigate it, we still\nneed to use gdb to output the memory context information.\n\nAnother issue is that a large amount of shared memory might be\nnecessary to save the memory context usages for all the processes. We can\nsave the usage information into a file instead, but that would cause\nmore overhead. If we use shared memory, a parameter similar to\ntrack_activity_query_size might be necessary. That is, backends save\nonly the specified number of memory context entries. If it's zero,\nthe feature is disabled.\n\nAlso, we should reduce the amount of information to save. 
For example,\ninstead of saving all memory context information like MemoryContextStats()\nprints, it might be better to save the summary stats (per memory context\ntype) from them.\n\n\n> \n> name | parent | level | total_bytes | total_nblocks | free_bytes | free_chunks | used_bytes | some other attibutes..\n> --------------------------+--------------------+-------+-------------+---------------+------------+-------------+------------\n> TopMemoryContext | | 0 | 68720 | 5 | 9936 | 16 | 58784\n> TopTransactionContext | TopMemoryContext | 1 | 8192 | 1 | 7720 | 0 | 472\n> PL/pgSQL function | TopMemoryContext | 1 | 16384 | 2 | 5912 | 1 | 10472\n> PL/pgSQL function | TopMemoryContext | 1 | 32768 | 3 | 15824 | 3 | 16944\n> dynahash | TopMemoryContext | 1 | 8192 | 1 | 512 | 0 | 7680\n> ...\n> \n> \n> [rough implementation ideas and challenges]\n> I suppose communication between a process which runs\n> pg_stat_get_backend_memory_context()(referred to as A) and\n> target backend(referred to as B) is like:\n> \n> 1. A sends a message to B and order to dump the memory contexts\n> 2. B dumps its memory contexts to some shared area\n> 3. 
A reads the shared area and returns it to the function invoker\n> \n> To do so, there seem some challenges.\n> \n> (1) how to share memory contexts information between backend processes\n> The amount of memory contexts greatly varies depending on the\n> situation, so it's not appropriate to share the memory contexts using\n> fixed shared memory.\n> Also using the file on 'stats_temp_directory' seems difficult thinking\n> about the background of the shared-memory based stats collector\n> proposal[2].\n> Instead, I'm thinking about using dsm_mq, which allows messages of\n> arbitrary length to be sent and receive.\n> \n> (2) how to send messages wanting memory contexts\n> Communicating via signal seems simple but assigning a specific number\n> of signal for this purpose seems wasting.\n> I'm thinking about using fixed shared memory to put dsm_mq handle.\n> To send a message, A creates a dsm_mq and put the handle on the shared\n> memory area. When B founds a handle, B dumps the memory contexts to the\n> corresponding dsm_mq.\n> \n> However, enabling B to find the handle needs to check the shared memory\n> periodically. I'm not sure the suitable location and timing for this\n> checking yet, and doubt this way of communication is acceptable because\n> it gives certain additional loads to all the backends.\n> \n> (3) clarifying the necessary attributes\n> As far as reading the past disucussion[3], it's not so clear what kind\n> of information should be exposed regarding memory contexts.\n> \n> \n> As a first step, to deal with (3) I'd like to add\n> pg_stat_get_backend_memory_context() which target is limited to the\n> local backend process.\n\n+1\n\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 18 Jun 2020 12:56:16 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
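[Editorial note] The opening message above suggests saving summary stats per memory context type instead of the full per-context dump that MemoryContextStats() prints. A minimal sketch of that aggregation, outside of PostgreSQL: the sample rows, field layout, and function name below are all invented for illustration, not taken from any patch in this thread.

```python
from collections import defaultdict

# Invented sample rows: (name, context_type, total_bytes, free_bytes).
# A real implementation would walk the backend's memory context tree.
contexts = [
    ("TopMemoryContext",      "AllocSet", 68720, 9936),
    ("TopTransactionContext", "AllocSet",  8192, 7720),
    ("CachedPlanSource",      "AllocSet", 16384, 5912),
    ("LOCALLOCK hash",        "dynahash",  8192,  512),
]

def summarize_by_type(rows):
    """Collapse per-context rows into one summary row per context type."""
    summary = defaultdict(lambda: {"contexts": 0, "total_bytes": 0, "free_bytes": 0})
    for _name, ctype, total, free in rows:
        s = summary[ctype]
        s["contexts"] += 1
        s["total_bytes"] += total
        s["free_bytes"] += free
    return dict(summary)

print(summarize_by_type(contexts))
```

The point of the suggestion is that the number of summary rows is bounded by the number of context types, which sidesteps the variable-size-dump problem raised later in the thread.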
{
"msg_contents": "On Wed, Jun 17, 2020 at 11:56 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> > As a first step, to deal with (3) I'd like to add\n> > pg_stat_get_backend_memory_context() which target is limited to the\n> > local backend process.\n>\n> +1\n\n+1 from me, too. Something that exposed this via shared memory would\nbe quite useful, and there are other things we'd like to expose\nsimilarly, such as the plan(s) from the running queries, or even just\nthe untruncated query string(s). I'd like to have a good\ninfrastructure for that sort of thing, but I think it's quite tricky\nto do properly. You need a variable-size chunk of shared memory, so\nprobably it has to use DSM somehow, and you need to update it as\nthings change, but if you update it too frequently performance will\nstink. If you ping processes to do the updates, how do you know when\nthey've completed them, and what happens if they don't respond in a\ntimely fashion? These are probably all solvable problems, but I don't\nthink they are very easy ones.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 19 Jun 2020 14:11:25 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "Hi!\n\nOn Thu, Jun 18, 2020 at 12:56 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> Agreed. The feature to view how local memory contexts are used in\n> each process is very useful!\n+1\n\n> > =# SELECT * FROM pg_stat_get_backend_memory_context(PID);\n>\n> I'm afraid that this interface is not convenient when we want to monitor\n> the usages of local memory contexts for all the processes. For example,\n> I'd like to monitor how much memory is totally used to store prepared\n> statements information. For that purpose, I wonder if it's more convenient\n> to provide the view displaying the memory context usages for\n> all the processes.\nHow about separating a function that examines memory consumption\ntrends for all processes and a function that examines memory\nconsumption for a particular phase of a particular process?\n\nFor the former, as Fujii said, the function would show limited\ninformation for each context type. All processes would calculate and\noutput the information while idle.\n\nI think the latter is useful for debugging and other purposes.\nFor example, imagine preparing a function for registration like the following.\n=# SELECT pg_stat_get_backend_memory_context_regist (pid, context,\nmax_children, calc_point)\n\npid: A target process\ncontext: The top level of the context of interest\nmax_children: Maximum number of the target context's children to output\n (similar to MemoryContextStatsInternal()'s max_children)\ncalc_point: Single or multiple position(s) to calculate and output\ncontext information\n (Existing hooks such as planner_hook, executor_start, etc. could be used. 
)\n\nThis function tells the target PID to output the information of the\nspecified context at the specified calc_point.\nWhen the target PID process reaches the calc_point, it calculates and\noutputs the context information once to a file or DSM.\n\n(Currently PostgreSQL has no formal way of externally modifying the\nparameters of a particular process, so it may need to be\nimplemented...)\n\nSometimes I want to know the memory usage in the planning phase or\nother phases, together with the query_string and/or plan_tree, before\nthe target process moves to idle status.\nSo it would be nice to retrieve memory usage at some arbitrary point in time!\n\nRegards,\n\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com\n\n\n",
"msg_date": "Fri, 26 Jun 2020 14:53:00 +0900",
"msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "Hi,\n\nWhile going through the mail chain on relation, plan and catalogue\ncaching [1], I'm wondering whether there is a way to know the\ncurrent relation, plan and catalogue cache sizes? If there is a way\nalready, please ignore this, and I would be grateful if someone could\npoint me to it.\n\nPosting this here as I felt it's relevant.\n\nIf there is no such way to know the cache sizes and other info such as\nstatistics, number of entries, cache misses, hits etc., can the\napproach discussed here be applied?\n\nIf the user knows the cache statistics and other information, maybe\nwe can allow the user to take appropriate actions such as allowing him to\ndelete a few entries through a command or some other way.\n\nI'm sorry if I'm diverting the topic being discussed in this mail\nthread; please ignore this if it is irrelevant.\n\n[1] - https://www.postgresql.org/message-id/flat/20161219.201505.11562604.horiguchi.kyotaro%40lab.ntt.co.jp\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 26 Jun 2020 12:12:31 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "Hi,\n\nOn Fri, Jun 26, 2020 at 3:42 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> While going through the mail chain on relation, plan and catalogue\n> caching [1], I'm thinking on the lines that is there a way to know the\n> current relation, plan and catalogue cache sizes? If there is a way\n> already, please ignore this and it would be grateful if someone point\n> me to that.\nAFAIK the only way to get statistics on PostgreSQL's backend internal\nlocal memory usage is to use MemoryContextStats() via gdb to output\nthe information to the log, so far.\n\n> If there is no such way to know the cache sizes and other info such as\n> statistics, number of entries, cache misses, hits etc. can the\n> approach discussed here be applied?\nI think it's partially yes.\n\n> If the user knows the cache statistics and other information, may be\n> we can allow user to take appropriate actions such as allowing him to\n> delete few entries through a command or some other way.\nYeah, one of the purposes of the features we are discussing here is to\nuse them for such situation.\n\nRegards,\n\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com\n\n\n",
"msg_date": "Fri, 26 Jun 2020 17:43:49 +0900",
"msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "On 2020-06-20 03:11, Robert Haas wrote:\n> On Wed, Jun 17, 2020 at 11:56 PM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>> > As a first step, to deal with (3) I'd like to add\n>> > pg_stat_get_backend_memory_context() which target is limited to the\n>> > local backend process.\n>> \n>> +1\n> \n> +1 from me, too.\n\nAttached a patch that adds a function exposing memory usage of local\nbackend.\n\nIt's almost same as pg_cheat_funcs's pg_stat_get_memory_context().\nI've also added MemoryContexts identifier because it seems useful to\ndistinguish the same kind of memory contexts.\n\nFor example, when there are many prepared statements we can\ndistinguish them using identifiers, which shows the cached query.\n\n =# SELECT name, ident FROM pg_stat_get_memory_contexts() WHERE name = \n'CachedPlanSource';\n name | ident\n ------------------+--------------------------------\n CachedPlanSource | PREPARE q1(text) AS SELECT ..;\n CachedPlanSource | PREPARE q2(text) AS SELECT ..;\n (2 rows)\n\n\n> Something that exposed this via shared memory would\n> be quite useful, and there are other things we'd like to expose\n> similarly, such as the plan(s) from the running queries, or even just\n> the untruncated query string(s). I'd like to have a good\n> infrastructure for that sort of thing, but I think it's quite tricky\n> to do properly. You need a variable-size chunk of shared memory, so\n> probably it has to use DSM somehow, and you need to update it as\n> things change, but if you update it too frequently performance will\n> stink. If you ping processes to do the updates, how do you know when\n> they've completed them, and what happens if they don't respond in a\n> timely fashion? 
These are probably all solvable problems, but I don't\n> think they are very easy ones.\n\nThanks for your comments!\n\nIt seems hard as you pointed out.\nI'm going to consider it along with the advice of Fujii and Kasahara.\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION",
"msg_date": "Mon, 29 Jun 2020 12:01:55 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "\n\nOn 2020/06/29 12:01, torikoshia wrote:\n> On 2020-06-20 03:11, Robert Haas wrote:\n>> On Wed, Jun 17, 2020 at 11:56 PM Fujii Masao\n>> <masao.fujii@oss.nttdata.com> wrote:\n>>> > As a first step, to deal with (3) I'd like to add\n>>> > pg_stat_get_backend_memory_context() which target is limited to the\n>>> > local backend process.\n>>>\n>>> +1\n>>\n>> +1 from me, too.\n> \n> Attached a patch that adds a function exposing memory usage of local\n> backend.\n\nThanks for the patch!\nCould you add this patch to Commitfest 2020-07?\n\n> \n> It's almost same as pg_cheat_funcs's pg_stat_get_memory_context().\n\nThis patch provides only the function, but isn't it convenient to\nprovide the view like pg_shmem_allocations?\n\n\n> I've also added MemoryContexts identifier because it seems useful to\n> distinguish the same kind of memory contexts.\n\nSounds good. But isn't it better to document each column?\nOtherwise, users cannot understand what the \"ident\" column indicates.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 29 Jun 2020 15:13:13 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "> > If there is no such way to know the cache sizes and other info such as\n> > statistics, number of entries, cache misses, hits etc. can the\n> > approach discussed here be applied?\n> I think it's partially yes.\n>\n\n> > If the user knows the cache statistics and other information, may be\n> > we can allow user to take appropriate actions such as allowing him to\n> > delete few entries through a command or some other way.\n> Yeah, one of the purposes of the features we are discussing here is to\n> use them for such situation.\n>\n\nThanks Kasahara for the response. I will try to understand more about\ngetting the cache statistics and\nalso will study the possibility of applying this approach being\ndiscussed here in this thread.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 29 Jun 2020 12:10:42 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "On Mon, Jun 29, 2020 at 3:13 PM Fujii Masao \n<masao.fujii@oss.nttdata.com> wrote:\n\n> Could you add this patch to Commitfest 2020-07?\n\nThanks for notifying me; I've added it.\nBTW, I registered you as an author because this patch used\na lot of pg_cheat_funcs' code.\n\n https://commitfest.postgresql.org/28/2622/\n\n> This patch provides only the function, but isn't it convenient to\n> provide the view like pg_shmem_allocations?\n\n> Sounds good. But isn't it better to document each column?\n> Otherwise, users cannot understand what the \"ident\" column indicates.\n\nAgreed.\nAttached a patch for creating a view for local memory contexts\nand its explanation in the documentation.\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION",
"msg_date": "Wed, 01 Jul 2020 14:48:02 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "\n\nOn 2020/07/01 14:48, torikoshia wrote:\n> On Mon, Jun 29, 2020 at 3:13 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> \n>> Could you add this patch to Commitfest 2020-07?\n> \n> Thanks for notifying me; I've added it.\n> BTW, I registered you as an author because this patch used\n> a lot of pg_cheat_funcs' code.\n> \n> https://commitfest.postgresql.org/28/2622/\n\nThanks!\n\n> \n>> This patch provides only the function, but isn't it convenient to\n>> provide the view like pg_shmem_allocations?\n> \n>> Sounds good. But isn't it better to document each column?\n>> Otherwise, users cannot understand what the \"ident\" column indicates.\n> \n> Agreed.\n> Attached a patch for creating a view for local memory contexts\n> and its explanation in the documentation.\n\nThanks for updating the patch!\n\nYou treat the pg_stat_local_memory_contexts view as a dynamic statistics view.\nBut isn't it better to treat it as just a system view like pg_shmem_allocations\nor pg_prepared_statements, because it's not statistics information? If yes,\nwe would need to rename the view, move the documentation from\nmonitoring.sgml to catalogs.sgml, and move the code from pgstat.c to\na more appropriate source file.\n\n+\ttupdesc = rsinfo->setDesc;\n+\ttupstore = rsinfo->setResult;\n\nThese seem not to be necessary.\n\n+\t/*\n+\t * It seems preferable to label dynahash contexts with just the hash table\n+\t * name. Those are already unique enough, so the \"dynahash\" part isn't\n+\t * very helpful, and this way is more consistent with pre-v11 practice.\n+\t */\n+\tif (ident && strcmp(name, \"dynahash\") == 0)\n+\t{\n+\t\tname = ident;\n+\t\tident = NULL;\n+\t}\n\nIMO it seems better to report both name and ident even in the case of\ndynahash than report only ident (as name). We can easily understand\nthe context is used for dynahash when it's reported. 
But you think it's\nbetter to report NULL rather than \"dynahash\"?\n\n+/* ----------\n+ * The max bytes for showing identifiers of MemoryContext.\n+ * ----------\n+ */\n+#define MEMORY_CONTEXT_IDENT_SIZE\t1024\n\nDo we really need this upper size limit? Could you tell me why\nthis is necessary?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 1 Jul 2020 16:43:42 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "> On 1 Jul 2020, at 07:48, torikoshia <torikoshia@oss.nttdata.com> wrote:\n\n> Attached a patch for creating a view for local memory context\n> and its explanation on the document.\n\nFor the next version (if there will be one), please remove the catversion bump\nfrom the patch as it will otherwise just break patch application without\nconstant rebasing (as it's done now). The committer will handle the catversion\nchange if/when it gets committed.\n\ncheers ./daniel\n\n",
"msg_date": "Wed, 1 Jul 2020 13:47:23 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "On Wed, Jul 1, 2020 at 4:43 PM Fujii Masao <masao.fujii@oss.nttdata.com> \nwrote:\n\nThanks for reviewing!\n\n> You treat pg_stat_local_memory_contexts view as a dynamic statistics \n> view.\n> But isn't it better to treat it as just system view like \n> pg_shmem_allocations\n> or pg_prepared_statements because it's not statistics information? If \n> yes,\n> we would need to rename the view, move the documentation from\n> monitoring.sgml to catalogs.sgml, and move the code from pgstat.c to\n> the more appropriate source file.\n\nAgreed.\nAt first, I thought not only statistical but dynamic information about \nexactly\nwhat is going on was OK when reading the sentence on the manual below.\n\n> PostgreSQL also supports reporting dynamic information about exactly \n> what is going on in the system right now, such as the exact command \n> currently being executed by other server processes, and which other \n> connections exist in the system. This facility is independent of the \n> collector process.\n\nHowever, now I feel something strange because existing pg_stats_* views \nseem\nto be per cluster information but the view I'm adding is about per \nbackend\ninformation.\n\nI'm going to do some renaming and transportations.\n\n- view name: pg_memory_contexts\n- function name: pg_get_memory_contexts()\n- source file: mainly src/backend/utils/mmgr/mcxt.c\n\n\n> + tupdesc = rsinfo->setDesc;\n> + tupstore = rsinfo->setResult;\n> \n> These seem not to be necessary.\n\nThanks!\n\n> + /*\n> + * It seems preferable to label dynahash contexts with just the \n> hash table\n> + * name. Those are already unique enough, so the \"dynahash\" \n> part isn't\n> + * very helpful, and this way is more consistent with pre-v11 \n> practice.\n> + */\n> + if (ident && strcmp(name, \"dynahash\") == 0)\n> + {\n> + name = ident;\n> + ident = NULL;\n> + }\n> \n> IMO it seems better to report both name and ident even in the case of\n> dynahash than report only ident (as name). 
We can easily understand\n> the context is used for dynahash when it's reported. But you think it's\n> better to report NULL rather than \"dynahash\"?\n\nThis code comes from MemoryContextStatsPrint() and my intention is to\nkeep consistent with it.\n\n> +/* ----------\n> + * The max bytes for showing identifiers of MemoryContext.\n> + * ----------\n> + */\n> +#define MEMORY_CONTEXT_IDENT_SIZE 1024\n> \n> Do we really need this upper size limit? Could you tell me why\n> this is necessary?\n\nIt is also derived from MemoryContextStatsPrint().\nI suppose it is for keeping the log readable.\n\nI'm going to follow the discussion on the mailing list and find out why\nthis code was introduced.\nIf there's no important reason to do the same in our context, I'll\nchange it.\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 01 Jul 2020 22:15:08 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
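[Editorial note] The dynahash labeling rule quoted in the review above (taken from MemoryContextStatsPrint()) is compact enough to restate. This is an illustrative Python transcription of the quoted C snippet, not PostgreSQL code; the function name is invented.

```python
def display_name_and_ident(name, ident):
    """Label dynahash contexts with just the hash table name, as the
    quoted C snippet does: the ident (the table name) is already unique
    enough, so the generic "dynahash" name is dropped."""
    if ident is not None and name == "dynahash":
        return ident, None
    return name, ident

print(display_name_and_ident("dynahash", "LOCALLOCK hash"))
# -> ('LOCALLOCK hash', None)
print(display_name_and_ident("CachedPlanSource", "PREPARE q1(text) AS SELECT 1"))
# -> ('CachedPlanSource', 'PREPARE q1(text) AS SELECT 1')
```

This captures why the view and the logged output show the same labels: both apply the same relabeling before display.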
{
"msg_contents": "On 2020-07-01 20:47, Daniel Gustafsson wrote:\n\n> For the next version (if there will be one), please remove the \n> catversion bump\n> from the patch as it will otherwise just break patch application \n> without\n> constant rebasing (as it's done now). The committer will handle the \n> catversion\n> change if/when it gets committed.\n> \n> cheers ./daniel\n\nThanks for telling me it!\nI'll do that way next time.\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 01 Jul 2020 22:18:01 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "\n\nOn 2020/07/01 22:15, torikoshia wrote:\n> On Wed, Jul 1, 2020 at 4:43 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> \n> Thanks for reviewing!\n> \n>> You treat pg_stat_local_memory_contexts view as a dynamic statistics view.\n>> But isn't it better to treat it as just system view like pg_shmem_allocations\n>> or pg_prepared_statements because it's not statistics information? If yes,\n>> we would need to rename the view, move the documentation from\n>> monitoring.sgml to catalogs.sgml, and move the code from pgstat.c to\n>> the more appropriate source file.\n> \n> Agreed.\n> At first, I thought not only statistical but dynamic information about exactly\n> what is going on was OK when reading the sentence on the manual below.\n> \n>> PostgreSQL also supports reporting dynamic information about exactly what is going on in the system right now, such as the exact command currently being executed by other server processes, and which other connections exist in the system. This facility is independent of the collector process.\n> \n> However, now I feel something strange because existing pg_stats_* views seem\n> to be per cluster information but the view I'm adding is about per backend\n> information.\n> \n> I'm going to do some renaming and transportations.\n> \n> - view name: pg_memory_contexts\n> - function name: pg_get_memory_contexts()\n> - source file: mainly src/backend/utils/mmgr/mcxt.c\n> \n> \n>> + tupdesc = rsinfo->setDesc;\n>> + tupstore = rsinfo->setResult;\n>>\n>> These seem not to be necessary.\n> \n> Thanks!\n> \n>> + /*\n>> + * It seems preferable to label dynahash contexts with just the hash table\n>> + * name. 
Those are already unique enough, so the \"dynahash\" part isn't\n>> + * very helpful, and this way is more consistent with pre-v11 practice.\n>> + */\n>> + if (ident && strcmp(name, \"dynahash\") == 0)\n>> + {\n>> + name = ident;\n>> + ident = NULL;\n>> + }\n>>\n>> IMO it seems better to report both name and ident even in the case of\n>> dynahash than report only ident (as name). We can easily understand\n>> the context is used for dynahash when it's reported. But you think it's\n>> better to report NULL rather than \"dynahash\"?\n> \n> These codes come from MemoryContextStatsPrint() and my intension is to\n> keep consistent with it.\n\nOk, understood! I agree that it's strange to display different names\nfor the same memory context between this view and logging.\n\nIt's helpful if the comment there refers to MemoryContextStatsPrint()\nand mentions the reason that you told.\n\n\n> \n>> +/* ----------\n>> + * The max bytes for showing identifiers of MemoryContext.\n>> + * ----------\n>> + */\n>> +#define MEMORY_CONTEXT_IDENT_SIZE 1024\n>>\n>> Do we really need this upper size limit? Could you tell me why\n>> this is necessary?\n> \n> It also derived from MemoryContextStatsPrint().\n> I suppose it is for keeping readability of the log..\n\nUnderstood. I may want to change the upper limit of query size to display.\nBut at the first step, I'm fine with limitting 1024 bytes.\n\n\n> \n> I'm going to follow the discussion on the mailing list and find why\n> these codes were introduced.\n\nhttps://www.postgresql.org/message-id/12319.1521999065%40sss.pgh.pa.us\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 1 Jul 2020 22:58:23 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "On Wed, Jul 1, 2020 at 10:15 PM torikoshia <torikoshia@oss.nttdata.com> \nwrote:\n> I'm going to do some renaming and transportations.\n> \n> - view name: pg_memory_contexts\n> - function name: pg_get_memory_contexts()\n> - source file: mainly src/backend/utils/mmgr/mcxt.c\n\nAttached an updated patch.\n\nOn Wed, Jul 1, 2020 at 10:58 PM Fujii Masao \n<masao.fujii@oss.nttdata.com> wrote:\n> Ok, understood! I agree that it's strange to display different names\n> for the same memory context between this view and logging.\n> \n> It's helpful if the comment there refers to MemoryContextStatsPrint()\n> and mentions the reason that you told.\n\nAgreed. I changed the comments.\n\n> > It also derived from MemoryContextStatsPrint().\n> > I suppose it is for keeping readability of the log..\n> \n> Understood. I may want to change the upper limit of query size to \n> display.\n> But at the first step, I'm fine with limitting 1024 bytes.\n\nThanks, I've left it as it was.\n\n> > I'm going to follow the discussion on the mailing list and find why\n> > these codes were introduced.\n> \n> https://www.postgresql.org/message-id/12319.1521999065%40sss.pgh.pa.us\n\nThanks for sharing!\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION",
"msg_date": "Fri, 03 Jul 2020 11:45:52 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "\n\nOn 2020/07/03 11:45, torikoshia wrote:\n> On Wed, Jul 1, 2020 at 10:15 PM torikoshia <torikoshia@oss.nttdata.com> wrote:\n>> I'm going to do some renaming and transportations.\n>>\n>> - view name: pg_memory_contexts\n\nI like more specific name like pg_backend_memory_contexts.\nBut I'd like to hear more opinions about the name from others.\n\n\n>> - function name: pg_get_memory_contexts()\n>> - source file: mainly src/backend/utils/mmgr/mcxt.c\n> \n> Attached an updated patch.\n\nThanks for updating the patch!\n\n+ <structfield>level</structfield> <type>integer</type>\n\nIn catalog.sgml, \"int4\" and \"int8\" are used in other catalogs tables.\nSo \"integer\" in the above should be \"int4\"?\n\n+ <structfield>total_bytes</structfield> <type>bigint</type>\n\n\"bigint\" should be \"int8\"?\n\n+ Identification information of the memory context. This field is truncated if the identification field is longer than 1024 characters\n\n\"characters\" should be \"bytes\"?\n\nIt's a bit confusing to have both \"This field\" and \"the identification field\"\nin one description. What about just \"This field is truncated at 1024 bytes\"?\n\n+ <para>\n+ Total bytes requested from malloc\n\nIsn't it better not to use \"malloc\" in the description? For example,\nwhat about something like \"Total bytes allocated for this memory context\"?\n\n+#define PG_STAT_GET_MEMORY_CONTEXT_COLS \t9\n\nIsn't it better to rename this to PG_GET_MEMORY_CONTEXTS_COLS\nfor the consistency with the function name?\n\n+\tmemset(nulls, 0, sizeof(nulls));\n\n\"values[]\" also should be initialized with zero?\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 3 Jul 2020 19:33:30 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "On Fri, Jul 3, 2020 at 7:33 PM Fujii Masao <masao.fujii@oss.nttdata.com> \nwrote:\n\nThanks for your review!\n\n> I like more specific name like pg_backend_memory_contexts.\n\nAgreed.\n\nWhen I was trying to add this function as a statistics function,\nI thought that naming it pg_stat_getbackend_memory_context()\nmight make people regard it as a \"per-backend statistics\nfunction\" whose parameter is a backend ID number.\nSo I removed \"backend\", but now there is no necessity to do\nso.\n\n> But I'd like to hear more opinions about the name from others.\n\nI changed the name to pg_backend_memory_contexts for the time\nbeing.\n\n\n>> - function name: pg_get_memory_contexts()\n>> - source file: mainly src/backend/utils/mmgr/mcxt.c\n\n\n>> + Identification information of the memory context. This field \n>> is truncated if the identification field is longer than 1024 \n>> characters\n> \n> \"characters\" should be \"bytes\"?\n\nFixed, but I used \"characters\" while referring to the\ndescription in the manual of pg_stat_activity.query\nbelow.\n\n| By default the query text is truncated at 1024 characters;\n\nIt has nothing to do with this thread, but considering\nmultibyte characters, it may also be better to change it\nto \"bytes\".\n\n\nRegarding the other comments, I revised the patch as you pointed out.\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION",
"msg_date": "Mon, 06 Jul 2020 12:12:18 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "\n\nOn 2020/07/06 12:12, torikoshia wrote:\n> On Fri, Jul 3, 2020 at 7:33 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> \n> Thanks for your review!\n> \n>> I like more specific name like pg_backend_memory_contexts.\n> \n> Agreed.\n> \n> When I was trying to add this function as statistics function,\n> I thought that naming pg_stat_getbackend_memory_context()\n> might make people regarded it as a \"per-backend statistics\n> function\", whose parameter is backend ID number.\n> So I removed \"backend\", but now there is no necessity to do\n> so.\n> \n>> But I'd like to hear more opinions about the name from others.\n> \n> I changed the name to pg_backend_memory_contexts for the time\n> being.\n\n+1\n\n\n>>> - function name: pg_get_memory_contexts()\n>>> - source file: mainly src/backend/utils/mmgr/mcxt.c\n> \n> \n>>> + Identification information of the memory context. This field is truncated if the identification field is longer than 1024 characters\n>>\n>> \"characters\" should be \"bytes\"?\n> \n> Fixed, but I used \"characters\" while referring to the\n> descriptions on the manual of pg_stat_activity.query\n> below.\n> \n> | By default the query text is truncated at 1024 characters;\n> \n> It has nothing to do with this thread, but considering\n> multibyte characters, it also may be better to change it\n> to \"bytes\".\n\nYeah, I agree we should write a separate patch fixing that. You will?\nIf not, I will do that later.\n\n\n> Regarding the other comments, I revised the patch as you pointed.\n\nThanks for updating the patch! The patch basically looks good to me.\nHere are some minor comments:\n\n+#define MEMORY_CONTEXT_IDENT_SIZE\t1024\n\nThis macro variable name sounds like the maximum allowed length of ident that\neach memory context has. But actually this limits the maximum bytes of ident\nto display. So I think that it's better to rename this macro to something like\nMEMORY_CONTEXT_IDENT_DISPLAY_SIZE. 
Thought?\n\n+#define PG_GET_MEMORY_CONTEXTS_COLS\t9\n+\tDatum\t\tvalues[PG_GET_MEMORY_CONTEXTS_COLS];\n+\tbool\t\tnulls[PG_GET_MEMORY_CONTEXTS_COLS];\n\nThis macro variable name should be PG_GET_BACKEND_MEMORY_CONTEXTS_COLS\nfor the consistency with the function name?\n\n+{ oid => '2282', descr => 'statistics: information about all memory contexts of local backend',\n\nIsn't it better to remove \"statistics: \" from the above description?\n\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>parent</structfield> <type>text</type>\n\nThere can be multiple memory contexts with the same name. So I'm afraid\nthat it's difficult to identify the actual parent memory context from this\n\"parent\" column. This is ok when logging memory contexts by calling\nMemoryContextStats() via gdb. Because child memory contexts are printed\njust under their parent, with indents. But this doesn't work in the view.\nTo identify the actual parent memory or calculate the memory contexts tree\nfrom the view, we might need to assign unique ID to each memory context\nand display it. But IMO this is overkill. So I'm fine with current \"parent\"\ncolumn. Thought? Do you have any better idea?\n\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 6 Jul 2020 15:16:38 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
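[Editorial note] The review above settles on a byte limit for displayed idents, and the thread separately notes that "characters" vs "bytes" matters once multibyte encodings are involved. A small illustrative sketch of byte-limited truncation that avoids emitting a torn multibyte character: the constant name follows the macro rename proposed above, but the code itself is not PostgreSQL's.

```python
MEMORY_CONTEXT_IDENT_DISPLAY_SIZE = 1024  # display limit in bytes, not characters

def truncate_ident(ident: str, limit: int = MEMORY_CONTEXT_IDENT_DISPLAY_SIZE) -> str:
    """Truncate at a byte boundary without splitting a multibyte character."""
    raw = ident.encode("utf-8")
    if len(raw) <= limit:
        return ident
    # errors="ignore" drops a trailing partial character, if any.
    return raw[:limit].decode("utf-8", errors="ignore")

assert truncate_ident("short") == "short"
assert len(truncate_ident("a" * 2000).encode("utf-8")) == 1024
assert len(truncate_ident("あ" * 600).encode("utf-8")) <= 1024  # 3-byte chars
```

Truncating by bytes and then repairing the boundary is why the documentation wording ("bytes", not "characters") is worth getting right: a naive character count would either overshoot the buffer or waste space.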
{
"msg_contents": "On 2020-07-06 15:16, Fujii Masao wrote:\n> On 2020/07/06 12:12, torikoshia wrote:\n>> On Fri, Jul 3, 2020 at 7:33 PM Fujii Masao \n>> <masao.fujii@oss.nttdata.com> wrote:\n>> \n>> Thanks for your review!\n>> \n>>> I like more specific name like pg_backend_memory_contexts.\n>> \n>> Agreed.\n>> \n>> When I was trying to add this function as statistics function,\n>> I thought that naming pg_stat_getbackend_memory_context()\n>> might make people regarded it as a \"per-backend statistics\n>> function\", whose parameter is backend ID number.\n>> So I removed \"backend\", but now there is no necessity to do\n>> so.\n>> \n>>> But I'd like to hear more opinions about the name from others.\n>> \n>> I changed the name to pg_backend_memory_contexts for the time\n>> being.\n> \n> +1\n> \n> \n>>>> - function name: pg_get_memory_contexts()\n>>>> - source file: mainly src/backend/utils/mmgr/mcxt.c\n>> \n>> \n>>>> + Identification information of the memory context. This field \n>>>> is truncated if the identification field is longer than 1024 \n>>>> characters\n>>> \n>>> \"characters\" should be \"bytes\"?\n>> \n>> Fixed, but I used \"characters\" while referring to the\n>> descriptions on the manual of pg_stat_activity.query\n>> below.\n>> \n>> | By default the query text is truncated at 1024 characters;\n>> \n>> It has nothing to do with this thread, but considering\n>> multibyte characters, it also may be better to change it\n>> to \"bytes\".\n> \n> Yeah, I agree we should write the separate patch fixing that. You will?\n> If not, I will do that later.\n\nThanks, I will try it!\n\n>> Regarding the other comments, I revised the patch as you pointed.\n> \n> Thanks for updating the patch! The patch basically looks good to me.\n> Here are some minor comments:\n> \n> +#define MEMORY_CONTEXT_IDENT_SIZE\t1024\n> \n> This macro variable name sounds like the maximum allowed length of ident \n> that\n> each memory context has. But actually this limits the maximum bytes of \n> ident\n> to display. So I think that it's better to rename this macro to \n> something like\n> MEMORY_CONTEXT_IDENT_DISPLAY_SIZE. Thought?\n\nAgreed.\nMEMORY_CONTEXT_IDENT_DISPLAY_SIZE seems more accurate.\n\n> +#define PG_GET_MEMORY_CONTEXTS_COLS\t9\n> +\tDatum\t\tvalues[PG_GET_MEMORY_CONTEXTS_COLS];\n> +\tbool\t\tnulls[PG_GET_MEMORY_CONTEXTS_COLS];\n> \n> This macro variable name should be PG_GET_BACKEND_MEMORY_CONTEXTS_COLS\n> for the consistency with the function name?\n\nThanks! Fixed it.\n\n> \n> +{ oid => '2282', descr => 'statistics: information about all memory\n> contexts of local backend',\n> \n> Isn't it better to remove \"statistics: \" from the above description?\n\nYeah, it's my oversight.\n\n> \n> + <row>\n> + <entry role=\"catalog_table_entry\"><para \n> role=\"column_definition\">\n> + <structfield>parent</structfield> <type>text</type>\n> \n> There can be multiple memory contexts with the same name. So I'm afraid\n> that it's difficult to identify the actual parent memory context from \n> this\n> \"parent\" column. This is ok when logging memory contexts by calling\n> MemoryContextStats() via gdb. Because child memory contexts are printed\n> just under their parent, with indents. But this doesn't work in the \n> view.\n> To identify the actual parent memory or calculate the memory contexts \n> tree\n> from the view, we might need to assign unique ID to each memory context\n> and display it. But IMO this is overkill. So I'm fine with current \n> \"parent\"\n> column. Thought? Do you have any better idea?\n\nIndeed.\nI also feel it's not usual to assign a unique ID, which\ncan vary every time the view displayed.\n\nWe show each context using a recursive function and this is\na kind of depth-first search. So, as far as I understand,\nwe can identify its parent using both the \"parent\" column\nand the order of the rows.\n\nDocumenting these things may worth for who want to identify\nthe relation between parents and children.\n\nOf course, in the relational model, the order of relation\ndoes not have meaning so it's also unusual in this sense..\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION",
"msg_date": "Tue, 07 Jul 2020 22:02:10 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "\n\nOn 2020/07/07 22:02, torikoshia wrote:\n> On 2020-07-06 15:16, Fujii Masao wrote:\n>> On 2020/07/06 12:12, torikoshia wrote:\n>>> On Fri, Jul 3, 2020 at 7:33 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>\n>>> Thanks for your review!\n>>>\n>>>> I like more specific name like pg_backend_memory_contexts.\n>>>\n>>> Agreed.\n>>>\n>>> When I was trying to add this function as statistics function,\n>>> I thought that naming pg_stat_getbackend_memory_context()\n>>> might make people regarded it as a \"per-backend statistics\n>>> function\", whose parameter is backend ID number.\n>>> So I removed \"backend\", but now there is no necessity to do\n>>> so.\n>>>\n>>>> But I'd like to hear more opinions about the name from others.\n>>>\n>>> I changed the name to pg_backend_memory_contexts for the time\n>>> being.\n>>\n>> +1\n>>\n>>\n>>>>> - function name: pg_get_memory_contexts()\n>>>>> - source file: mainly src/backend/utils/mmgr/mcxt.c\n>>>\n>>>\n>>>>> + Identification information of the memory context. This field is truncated if the identification field is longer than 1024 characters\n>>>>\n>>>> \"characters\" should be \"bytes\"?\n>>>\n>>> Fixed, but I used \"characters\" while referring to the\n>>> descriptions on the manual of pg_stat_activity.query\n>>> below.\n>>>\n>>> | By default the query text is truncated at 1024 characters;\n>>>\n>>> It has nothing to do with this thread, but considering\n>>> multibyte characters, it also may be better to change it\n>>> to \"bytes\".\n>>\n>> Yeah, I agree we should write the separate patch fixing that. You will?\n>> If not, I will do that later.\n> \n> Thanks, I will try it!\n\nThanks!\n\n\n> \n>>> Regarding the other comments, I revised the patch as you pointed.\n>>\n>> Thanks for updating the patch! The patch basically looks good to me.\n>> Here are some minor comments:\n>>\n>> +#define MEMORY_CONTEXT_IDENT_SIZE 1024\n>>\n>> This macro variable name sounds like the maximum allowed length of ident that\n>> each memory context has. But actually this limits the maximum bytes of ident\n>> to display. So I think that it's better to rename this macro to something like\n>> MEMORY_CONTEXT_IDENT_DISPLAY_SIZE. Thought?\n> \n> Agreed.\n> MEMORY_CONTEXT_IDENT_DISPLAY_SIZE seems more accurate.\n> \n>> +#define PG_GET_MEMORY_CONTEXTS_COLS 9\n>> + Datum values[PG_GET_MEMORY_CONTEXTS_COLS];\n>> + bool nulls[PG_GET_MEMORY_CONTEXTS_COLS];\n>>\n>> This macro variable name should be PG_GET_BACKEND_MEMORY_CONTEXTS_COLS\n>> for the consistency with the function name?\n> \n> Thanks! Fixed it.\n\nThanks for updating the patch! It basically looks good to me.\n\n+ <indexterm zone=\"view-pg-backend-memory-contexts\">\n+ <primary>backend memory contexts</primary>\n+ </indexterm>\n\nDo we need this indexterm?\n\n\n> \n>>\n>> +{ oid => '2282', descr => 'statistics: information about all memory\n>> contexts of local backend',\n>>\n>> Isn't it better to remove \"statistics: \" from the above description?\n> \n> Yeah, it's my oversight.\n> \n>>\n>> + <row>\n>> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n>> + <structfield>parent</structfield> <type>text</type>\n>>\n>> There can be multiple memory contexts with the same name. So I'm afraid\n>> that it's difficult to identify the actual parent memory context from this\n>> \"parent\" column. This is ok when logging memory contexts by calling\n>> MemoryContextStats() via gdb. Because child memory contexts are printed\n>> just under their parent, with indents. But this doesn't work in the view.\n>> To identify the actual parent memory or calculate the memory contexts tree\n>> from the view, we might need to assign unique ID to each memory context\n>> and display it. But IMO this is overkill. So I'm fine with current \"parent\"\n>> column. Thought? Do you have any better idea?\n> \n> Indeed.\n> I also feel it's not usual to assign a unique ID, which\n> can vary every time the view displayed.\n\nAgreed. Displaying such ID would be more confusing to users.\nOk, let's leave the code as it is.\n\nAnother comment about parent column is: dynahash can be parent?\nIf yes, its indent instead of name should be displayed in parent column?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 8 Jul 2020 22:12:33 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "Hi,\n\nI think this is an incredibly useful feature.\n\n\nOn 2020-07-07 22:02:10 +0900, torikoshia wrote:\n> > There can be multiple memory contexts with the same name. So I'm afraid\n> > that it's difficult to identify the actual parent memory context from\n> > this\n> > \"parent\" column. This is ok when logging memory contexts by calling\n> > MemoryContextStats() via gdb. Because child memory contexts are printed\n> > just under their parent, with indents. But this doesn't work in the\n> > view.\n> > To identify the actual parent memory or calculate the memory contexts\n> > tree\n> > from the view, we might need to assign unique ID to each memory context\n> > and display it. But IMO this is overkill. So I'm fine with current\n> > \"parent\"\n> > column. Thought? Do you have any better idea?\n> \n> Indeed.\n> I also feel it's not usual to assign a unique ID, which\n> can vary every time the view displayed.\n\nHm. I wonder if we just could include the address of the context\nitself. There might be reasons not to do so (e.g. security concerns\nabout leaked pointers making attacks easier), but I think it's worth\nconsidering.\n\n\n> We show each context using a recursive function and this is\n> a kind of depth-first search. So, as far as I understand,\n> we can identify its parent using both the \"parent\" column\n> and the order of the rows.\n> \n> Documenting these things may worth for who want to identify\n> the relation between parents and children.\n> \n> Of course, in the relational model, the order of relation\n> does not have meaning so it's also unusual in this sense..\n\nIt makes it pretty hard to write summarizing queries, so I am not a huge\nfan of just relying on the order.\n\n\n> +/*\n> + * PutMemoryContextsStatsTupleStore\n> + *\t\tOne recursion level for pg_get_backend_memory_contexts.\n> + */\n> +static void\n> +PutMemoryContextsStatsTupleStore(Tuplestorestate *tupstore,\n> +\t\t\t\t\t\t\t\tTupleDesc tupdesc, MemoryContext context,\n> +\t\t\t\t\t\t\t\tMemoryContext parent, int level)\n> +{\n> +#define PG_GET_BACKEND_MEMORY_CONTEXTS_COLS\t9\n> +\tDatum\t\tvalues[PG_GET_BACKEND_MEMORY_CONTEXTS_COLS];\n> +\tbool\t\tnulls[PG_GET_BACKEND_MEMORY_CONTEXTS_COLS];\n> +\tMemoryContextCounters stat;\n> +\tMemoryContext child;\n> +\tconst char *name = context->name;\n> +\tconst char *ident = context->ident;\n> +\n> +\tif (context == NULL)\n> +\t\treturn;\n> +\n> +\t/*\n> +\t * To be consistent with logging output, we label dynahash contexts\n> +\t * with just the hash table name as with MemoryContextStatsPrint().\n> +\t */\n> +\tif (ident && strcmp(name, \"dynahash\") == 0)\n> +\t{\n> +\t\tname = ident;\n> +\t\tident = NULL;\n> +\t}\n> +\n> +\t/* Examine the context itself */\n> +\tmemset(&stat, 0, sizeof(stat));\n> +\t(*context->methods->stats) (context, NULL, (void *) &level, &stat);\n> +\n> +\tmemset(values, 0, sizeof(values));\n> +\tmemset(nulls, 0, sizeof(nulls));\n> +\n> +\tvalues[0] = CStringGetTextDatum(name);\n> +\n> +\tif (ident)\n> +\t{\n> +\t\tint\t\tidlen = strlen(ident);\n> +\t\tchar\t\tclipped_ident[MEMORY_CONTEXT_IDENT_DISPLAY_SIZE];\n> +\n> +\t\t/*\n> +\t\t * Some identifiers such as SQL query string can be very long,\n> +\t\t * truncate oversize identifiers.\n> +\t\t */\n> +\t\tif (idlen >= MEMORY_CONTEXT_IDENT_DISPLAY_SIZE)\n> +\t\t\tidlen = pg_mbcliplen(ident, idlen, MEMORY_CONTEXT_IDENT_DISPLAY_SIZE - 1);\n> +\n\nWhy?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 8 Jul 2020 10:03:48 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "On 2020-07-08 22:12, Fujii Masao wrote:\n> Thanks for updating the patch! It basically looks good to me.\n> \n> + <indexterm zone=\"view-pg-backend-memory-contexts\">\n> + <primary>backend memory contexts</primary>\n> + </indexterm>\n> \n> Do we need this indexterm?\n\nThanks! it's not necessary. I remove this indexterm.\n\n\n>>> \n>>> +{ oid => '2282', descr => 'statistics: information about all memory\n>>> contexts of local backend',\n>>> \n>>> Isn't it better to remove \"statistics: \" from the above description?\n>> \n>> Yeah, it's my oversight.\n>> \n>>> \n>>> + <row>\n>>> + <entry role=\"catalog_table_entry\"><para \n>>> role=\"column_definition\">\n>>> + <structfield>parent</structfield> <type>text</type>\n>>> \n>>> There can be multiple memory contexts with the same name. So I'm \n>>> afraid\n>>> that it's difficult to identify the actual parent memory context from \n>>> this\n>>> \"parent\" column. This is ok when logging memory contexts by calling\n>>> MemoryContextStats() via gdb. Because child memory contexts are \n>>> printed\n>>> just under their parent, with indents. But this doesn't work in the \n>>> view.\n>>> To identify the actual parent memory or calculate the memory contexts \n>>> tree\n>>> from the view, we might need to assign unique ID to each memory \n>>> context\n>>> and display it. But IMO this is overkill. So I'm fine with current \n>>> \"parent\"\n>>> column. Thought? Do you have any better idea?\n>> \n>> Indeed.\n>> I also feel it's not usual to assign a unique ID, which\n>> can vary every time the view displayed.\n> \n> Agreed. Displaying such ID would be more confusing to users.\n> Ok, let's leave the code as it is.\n> \n> Another comment about parent column is: dynahash can be parent?\n> If yes, its indent instead of name should be displayed in parent \n> column?\n\nI'm not sure yet, but considering the changes in the future, it seems\nbetter to do so.\n\nBut if we add information for identifying parent-child relation like the\nmemory address suggested from Andres, it seems not necessary.\n\nSo I'd like to go back to this point.\n\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n\n",
"msg_date": "Fri, 10 Jul 2020 17:30:22 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "On 2020-07-09 02:03, Andres Freund wrote:\n> Hi,\n> \n> I think this is an incredibly useful feature.\n\nThanks for your kind comments and suggestion!\n\n\n> On 2020-07-07 22:02:10 +0900, torikoshia wrote:\n>> > There can be multiple memory contexts with the same name. So I'm afraid\n>> > that it's difficult to identify the actual parent memory context from\n>> > this\n>> > \"parent\" column. This is ok when logging memory contexts by calling\n>> > MemoryContextStats() via gdb. Because child memory contexts are printed\n>> > just under their parent, with indents. But this doesn't work in the\n>> > view.\n>> > To identify the actual parent memory or calculate the memory contexts\n>> > tree\n>> > from the view, we might need to assign unique ID to each memory context\n>> > and display it. But IMO this is overkill. So I'm fine with current\n>> > \"parent\"\n>> > column. Thought? Do you have any better idea?\n>> \n>> Indeed.\n>> I also feel it's not usual to assign a unique ID, which\n>> can vary every time the view displayed.\n> \n> Hm. I wonder if we just could include the address of the context\n> itself. There might be reasons not to do so (e.g. security concerns\n> about leaked pointers making attacks easier), but I think it's worth\n> considering.\n\n\nI tried exposing addresses of each context and their parent.\nAttached a poc patch.\n\n =# SELECT name, address, parent_address, total_bytes FROM \npg_backend_memory_contexts ;\n\n           name           |  address  | parent_address | total_bytes\n --------------------------+-----------+----------------+-------------\n TopMemoryContext         | 0x1280da0 |                |       80800\n TopTransactionContext    | 0x1309040 | 0x1280da0      |        8192\n Prepared Queries         | 0x138a480 | 0x1280da0      |       16384\n Type information cache   | 0x134b8c0 | 0x1280da0      |       24624\n ...\n CacheMemoryContext       | 0x12cb390 | 0x1280da0      |     1048576\n CachedPlanSource         | 0x13c47f0 | 0x12cb390      |        4096\n CachedPlanQuery          | 0x13c9ae0 | 0x13c47f0      |        4096\n CachedPlanSource         | 0x13c7310 | 0x12cb390      |        4096\n CachedPlanQuery          | 0x13c1230 | 0x13c7310      |        4096\n ...\n\n\nNow it's possible to identify the actual parent memory context even when\nthere are multiple memory contexts with the same name.\n\nI'm not sure, but I'm also worrying about this might incur some security\nrelated problems..\n\nI'd like to hear more opinions about:\n\n- whether information for identifying parent-child relation is necessary \nor it's an overkill\n- if this information is necessary, memory address is suitable or other \nmeans like assigning unique numbers are required\n\n\n>> +/*\n>> + * PutMemoryContextsStatsTupleStore\n>> + *\t\tOne recursion level for pg_get_backend_memory_contexts.\n>> + */\n>> +static void\n>> +PutMemoryContextsStatsTupleStore(Tuplestorestate *tupstore,\n>> +\t\t\t\t\t\t\t\tTupleDesc tupdesc, MemoryContext context,\n>> +\t\t\t\t\t\t\t\tMemoryContext parent, int level)\n>> +{\n>> +#define PG_GET_BACKEND_MEMORY_CONTEXTS_COLS\t9\n>> +\tDatum\t\tvalues[PG_GET_BACKEND_MEMORY_CONTEXTS_COLS];\n>> +\tbool\t\tnulls[PG_GET_BACKEND_MEMORY_CONTEXTS_COLS];\n>> +\tMemoryContextCounters stat;\n>> +\tMemoryContext child;\n>> +\tconst char *name = context->name;\n>> +\tconst char *ident = context->ident;\n>> +\n>> +\tif (context == NULL)\n>> +\t\treturn;\n>> +\n>> +\t/*\n>> +\t * To be consistent with logging output, we label dynahash contexts\n>> +\t * with just the hash table name as with MemoryContextStatsPrint().\n>> +\t */\n>> +\tif (ident && strcmp(name, \"dynahash\") == 0)\n>> +\t{\n>> +\t\tname = ident;\n>> +\t\tident = NULL;\n>> +\t}\n>> +\n>> +\t/* Examine the context itself */\n>> +\tmemset(&stat, 0, sizeof(stat));\n>> +\t(*context->methods->stats) (context, NULL, (void *) &level, &stat);\n>> +\n>> +\tmemset(values, 0, sizeof(values));\n>> +\tmemset(nulls, 0, sizeof(nulls));\n>> +\n>> +\tvalues[0] = CStringGetTextDatum(name);\n>> +\n>> +\tif (ident)\n>> +\t{\n>> +\t\tint\t\tidlen = strlen(ident);\n>> +\t\tchar\t\tclipped_ident[MEMORY_CONTEXT_IDENT_DISPLAY_SIZE];\n>> +\n>> +\t\t/*\n>> +\t\t * Some identifiers such as SQL query string can be very long,\n>> +\t\t * truncate oversize identifiers.\n>> +\t\t */\n>> +\t\tif (idlen >= MEMORY_CONTEXT_IDENT_DISPLAY_SIZE)\n>> +\t\t\tidlen = pg_mbcliplen(ident, idlen, \n>> MEMORY_CONTEXT_IDENT_DISPLAY_SIZE - 1);\n>> +\n> \n> Why?\n\nAs described below[1], too long messages caused problems in the past and \nnow\nMemoryContextStatsPrint() truncates ident, so I decided to truncate it \nalso\nhere.\n\nDo you think it's not necessary here?\n\n[1] https://www.postgresql.org/message-id/12319.1521999065@sss.pgh.pa.us\n\n\nRegards,\n\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION",
"msg_date": "Fri, 10 Jul 2020 17:32:23 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "\n\nOn 2020/07/10 17:32, torikoshia wrote:\n> On 2020-07-09 02:03, Andres Freund wrote:\n>> Hi,\n>>\n>> I think this is an incredibly useful feature.\n> \n> Thanks for your kind comments and suggestion!\n> \n> \n>> On 2020-07-07 22:02:10 +0900, torikoshia wrote:\n>>> > There can be multiple memory contexts with the same name. So I'm afraid\n>>> > that it's difficult to identify the actual parent memory context from\n>>> > this\n>>> > \"parent\" column. This is ok when logging memory contexts by calling\n>>> > MemoryContextStats() via gdb. Because child memory contexts are printed\n>>> > just under their parent, with indents. But this doesn't work in the\n>>> > view.\n>>> > To identify the actual parent memory or calculate the memory contexts\n>>> > tree\n>>> > from the view, we might need to assign unique ID to each memory context\n>>> > and display it. But IMO this is overkill. So I'm fine with current\n>>> > \"parent\"\n>>> > column. Thought? Do you have any better idea?\n>>>\n>>> Indeed.\n>>> I also feel it's not usual to assign a unique ID, which\n>>> can vary every time the view displayed.\n>>\n>> Hm. I wonder if we just could include the address of the context\n>> itself. There might be reasons not to do so (e.g. security concerns\n>> about leaked pointers making attacks easier), but I think it's worth\n>> considering.\n> \n> \n> I tried exposing addresses of each context and their parent.\n> Attached a poc patch.\n> \n>   =# SELECT name, address, parent_address, total_bytes FROM pg_backend_memory_contexts ;\n> \n>            name            |  address  | parent_address | total_bytes\n>   --------------------------+-----------+----------------+-------------\n>    TopMemoryContext         | 0x1280da0 |                |       80800\n>    TopTransactionContext    | 0x1309040 | 0x1280da0      |        8192\n>    Prepared Queries         | 0x138a480 | 0x1280da0      |       16384\n>    Type information cache   | 0x134b8c0 | 0x1280da0      |       24624\n>    ...\n>    CacheMemoryContext       | 0x12cb390 | 0x1280da0      |     1048576\n>    CachedPlanSource         | 0x13c47f0 | 0x12cb390      |        4096\n>    CachedPlanQuery          | 0x13c9ae0 | 0x13c47f0      |        4096\n>    CachedPlanSource         | 0x13c7310 | 0x12cb390      |        4096\n>    CachedPlanQuery          | 0x13c1230 | 0x13c7310      |        4096\n>    ...\n> \n> \n> Now it's possible to identify the actual parent memory context even when\n> there are multiple memory contexts with the same name.\n> \n> I'm not sure, but I'm also worrying about this might incur some security\n> related problems..\n> \n> I'd like to hear more opinions about:\n> \n> - whether information for identifying parent-child relation is necessary or it's an overkill\n> - if this information is necessary, memory address is suitable or other means like assigning unique numbers are required\n\nTo consider this, I'd like to know what security issue can actually\nhappen when memory addresses are exposed. I have no idea about this..\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 13 Jul 2020 13:00:22 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "Hi,\n\nOn Fri, Jul 10, 2020 at 5:32 PM torikoshia <torikoshia@oss.nttdata.com> wrote:\n> - whether information for identifying parent-child relation is necessary\n> or it's an overkill\nI think it's important to understand the parent-child relationship of\nthe context.\nPersonally, I often want to know the following two things ..\n\n- In which life cycle is the target context? (Remaining as long as the\nprocess is living? per query?)\n- Does the target context belong to the correct (parent) context?\n\n> - if this information is necessary, memory address is suitable or other\n> means like assigning unique numbers are required\nIMO, If each context can be uniquely identified (or easily guessed) by\n\"name\" and \"ident\",\nthen I don't think the address information is necessary.\nInstead, I like the way that directly shows the context name of the\nparent, as in the 0005 patch.\n\nBest regards\n\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com\n\n\n",
"msg_date": "Thu, 30 Jul 2020 15:13:51 +0900",
"msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "On 2020-07-30 15:13, Kasahara Tatsuhito wrote:\n> Hi,\n> \n> On Fri, Jul 10, 2020 at 5:32 PM torikoshia <torikoshia@oss.nttdata.com> \n> wrote:\n>> - whether information for identifying parent-child relation is \n>> necessary\n>> or it's an overkill\n> I think it's important to understand the parent-child relationship of\n> the context.\n> Personally, I often want to know the following two things ..\n> \n> - In which life cycle is the target context? (Remaining as long as the\n> process is living? per query?)\n> - Does the target context belong to the correct (parent) context?\n> \n>> - if this information is necessary, memory address is suitable or \n>> other\n>> means like assigning unique numbers are required\n> IMO, If each context can be uniquely identified (or easily guessed) by\n> \"name\" and \"ident\",\n> then I don't think the address information is necessary.\n> Instead, I like the way that directly shows the context name of the\n> parent, as in the 0005 patch.\n\nThanks for your opinion!\n\nI also feel it'll be sufficient to know not the exact memory context\nof the parent but the name of the parent context.\n\nAnd as Fujii-san told me in person, exposing memory address seems\nnot preferable considering there are security techniques like\naddress space layout randomization.\n\n\n\n> On 2020-07-10 08:30:22 +0900, torikoshia wrote:\n>> On 2020-07-08 22:12, Fujii Masao wrote:\n\n>> Another comment about parent column is: dynahash can be parent?\n>> If yes, its indent instead of name should be displayed in parent\n>> column?\n\n> I'm not sure yet, but considering the changes in the future, it seems\n> better to do so.\n\nAttached a patch which displays ident as parent when dynahash is a\nparent.\n\nI could not find the case when dynahash can be a parent so I tested it\nusing attached test purposed patch.\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION",
"msg_date": "Fri, 31 Jul 2020 17:24:44 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "On Fri, Jul 31, 2020 at 4:25 AM torikoshia <torikoshia@oss.nttdata.com> wrote:\n> And as Fujii-san told me in person, exposing memory address seems\n> not preferable considering there are security techniques like\n> address space layout randomization.\n\nYeah, exactly. ASLR wouldn't do anything to improve security if there\nwere no other security bugs, but there are, and some of those bugs are\nharder to exploit if you don't know the precise memory addresses of\ncertain data structures. Similarly, exposing the addresses of our\ninternal data structures is harmless if we have no other security\nbugs, but if we do, it might make those bugs easier to exploit. I\ndon't think this information is useful enough to justify taking that\nrisk.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 31 Jul 2020 15:23:52 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: tested, passed\n\nI tested the latest patch(0007-Adding-a-function-exposing-memory-usage-of-local-backend.patch) \r\nwith the latest PG-version (199cec9779504c08aaa8159c6308283156547409) and test was passed.\r\nIt looks good to me.\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Fri, 07 Aug 2020 07:38:13 +0000",
"msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "On Fri, Jul 31, 2020 at 03:23:52PM -0400, Robert Haas wrote:\n> On Fri, Jul 31, 2020 at 4:25 AM torikoshia <torikoshia@oss.nttdata.com> wrote:\n>> And as Fujii-san told me in person, exposing memory address seems\n>> not preferable considering there are security techniques like\n>> address space layout randomization.\n> \n> Yeah, exactly. ASLR wouldn't do anything to improve security if there\n> were no other security bugs, but there are, and some of those bugs are\n> harder to exploit if you don't know the precise memory addresses of\n> certain data structures. Similarly, exposing the addresses of our\n> internal data structures is harmless if we have no other security\n> bugs, but if we do, it might make those bugs easier to exploit. I\n> don't think this information is useful enough to justify taking that\n> risk.\n\nFWIW, this is the class of issues where it is possible to print some\nareas of memory, or even manipulate the stack so as it was possible to\npass down a custom pointer, so exposing the pointer locations is a\nreal risk, and this has happened in the past. Anyway, it seems to me\nthat if this part is done, we could just make it superuser-only with\nrestrictive REVOKE privileges, but I am not sure that we have enough\nuser cases to justify this addition.\n--\nMichael",
"msg_date": "Sat, 8 Aug 2020 10:44:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "On 2020-08-08 10:44, Michael Paquier wrote:\n> On Fri, Jul 31, 2020 at 03:23:52PM -0400, Robert Haas wrote:\n>> On Fri, Jul 31, 2020 at 4:25 AM torikoshia \n>> <torikoshia@oss.nttdata.com> wrote:\n>>> And as Fujii-san told me in person, exposing memory address seems\n>>> not preferable considering there are security techniques like\n>>> address space layout randomization.\n>> \n>> Yeah, exactly. ASLR wouldn't do anything to improve security if there\n>> were no other security bugs, but there are, and some of those bugs are\n>> harder to exploit if you don't know the precise memory addresses of\n>> certain data structures. Similarly, exposing the addresses of our\n>> internal data structures is harmless if we have no other security\n>> bugs, but if we do, it might make those bugs easier to exploit. I\n>> don't think this information is useful enough to justify taking that\n>> risk.\n> \n> FWIW, this is the class of issues where it is possible to print some\n> areas of memory, or even manipulate the stack so as it was possible to\n> pass down a custom pointer, so exposing the pointer locations is a\n> real risk, and this has happened in the past. Anyway, it seems to me\n> that if this part is done, we could just make it superuser-only with\n> restrictive REVOKE privileges, but I am not sure that we have enough\n> user cases to justify this addition.\n\n\nThanks for your comments!\n\nI convinced that exposing pointer locations introduce security risks\nand it seems better not to do so.\n\nAnd I now feel identifying exact memory context by exposing memory\naddress or other means seems overkill.\nShowing just the context name of the parent would be sufficient and\n0007 patch takes this way.\n\n\nOn 2020-08-07 16:38, Kasahara Tatsuhito wrote:\n> The following review has been posted through the commitfest \n> application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: not tested\n> Documentation: tested, passed\n> \n> I tested the latest\n> patch(0007-Adding-a-function-exposing-memory-usage-of-local-backend.patch)\n> with the latest PG-version (199cec9779504c08aaa8159c6308283156547409)\n> and test was passed.\n> It looks good to me.\n> \n> The new status of this patch is: Ready for Committer\n\nThanks for your testing!\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 11 Aug 2020 15:24:52 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "\n\nOn 2020/08/11 15:24, torikoshia wrote:\n> On 2020-08-08 10:44, Michael Paquier wrote:\n>> On Fri, Jul 31, 2020 at 03:23:52PM -0400, Robert Haas wrote:\n>>> On Fri, Jul 31, 2020 at 4:25 AM torikoshia <torikoshia@oss.nttdata.com> wrote:\n>>>> And as Fujii-san told me in person, exposing memory address seems\n>>>> not preferable considering there are security techniques like\n>>>> address space layout randomization.\n>>>\n>>> Yeah, exactly. ASLR wouldn't do anything to improve security if there\n>>> were no other security bugs, but there are, and some of those bugs are\n>>> harder to exploit if you don't know the precise memory addresses of\n>>> certain data structures. Similarly, exposing the addresses of our\n>>> internal data structures is harmless if we have no other security\n>>> bugs, but if we do, it might make those bugs easier to exploit. I\n>>> don't think this information is useful enough to justify taking that\n>>> risk.\n>>\n>> FWIW, this is the class of issues where it is possible to print some\n>> areas of memory, or even manipulate the stack so as it was possible to\n>> pass down a custom pointer, so exposing the pointer locations is a\n>> real risk, and this has happened in the past. 
Anyway, it seems to me\n>> that if this part is done, we could just make it superuser-only with\n>> restrictive REVOKE privileges, but I am not sure that we have enough\n>> user cases to justify this addition.\n> \n> \n> Thanks for your comments!\n> \n> I convinced that exposing pointer locations introduce security risks\n> and it seems better not to do so.\n> \n> And I now feel identifying exact memory context by exposing memory\n> address or other means seems overkill.\n> Showing just the context name of the parent would be sufficient and\n> 0007 pattch takes this way.\n\nAgreed.\n\n\n> \n> \n> On 2020-08-07 16:38, Kasahara Tatsuhito wrote:\n>> The following review has been posted through the commitfest application:\n>> make installcheck-world: tested, passed\n>> Implements feature: tested, passed\n>> Spec compliant: not tested\n>> Documentation: tested, passed\n>>\n>> I tested the latest\n>> patch(0007-Adding-a-function-exposing-memory-usage-of-local-backend.patch)\n>> with the latest PG-version (199cec9779504c08aaa8159c6308283156547409)\n>> and test was passed.\n>> It looks good to me.\n>>\n>> The new status of this patch is: Ready for Committer\n> \n> Thanks for your testing!\n\nThanks for updating the patch! Here are the review comments.\n\n+ <row>\n+ <entry><link linkend=\"view-pg-backend-memory-contexts\"><structname>pg_backend_memory_contexts</structname></link></entry>\n+ <entry>backend memory contexts</entry>\n+ </row>\n\nThe above is located just after pg_matviews entry. But it should be located\njust after pg_available_extension_versions entry. 
Because the rows in the table\n\"System Views\" should be located in alphabetical order.\n\n\n+ <sect1 id=\"view-pg-backend-memory-contexts\">\n+ <title><structname>pg_backend_memory_contexts</structname></title>\n\nSame as above.\n\n\n+ The view <structname>pg_backend_memory_contexts</structname> displays all\n+ the local backend memory contexts.\n\nThis description seems a bit confusing because maybe we can interpret this\nas \"... displays the memory contexts of all the local backends\" wrongly. Thoughts?\nWhat about the following description, instead?\n\n The view <structname>pg_backend_memory_contexts</structname> displays all\n the memory contexts of the server process attached to the current session.\n\n\n+\tconst char *name = context->name;\n+\tconst char *ident = context->ident;\n+\n+\tif (context == NULL)\n+\t\treturn;\n\nThe above check \"context == NULL\" is useless? If \"context\" is actually NULL,\n\"context->name\" would cause a segmentation fault, so ISTM that the check\nwill never be performed.\n\nIf \"context\" can be NULL, the check should be performed before accessing\n\"context\". OTOH, if \"context\" must not be NULL per the specification of\nPutMemoryContextStatsTupleStore(), an assertion test checking\n\"context != NULL\" should be used here, instead?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 17 Aug 2020 21:14:22 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "\n\nOn 2020/08/17 21:14, Fujii Masao wrote:\n> \n> \n> On 2020/08/11 15:24, torikoshia wrote:\n>> On 2020-08-08 10:44, Michael Paquier wrote:\n>>> On Fri, Jul 31, 2020 at 03:23:52PM -0400, Robert Haas wrote:\n>>>> On Fri, Jul 31, 2020 at 4:25 AM torikoshia <torikoshia@oss.nttdata.com> wrote:\n>>>>> And as Fujii-san told me in person, exposing memory address seems\n>>>>> not preferable considering there are security techniques like\n>>>>> address space layout randomization.\n>>>>\n>>>> Yeah, exactly. ASLR wouldn't do anything to improve security if there\n>>>> were no other security bugs, but there are, and some of those bugs are\n>>>> harder to exploit if you don't know the precise memory addresses of\n>>>> certain data structures. Similarly, exposing the addresses of our\n>>>> internal data structures is harmless if we have no other security\n>>>> bugs, but if we do, it might make those bugs easier to exploit. I\n>>>> don't think this information is useful enough to justify taking that\n>>>> risk.\n>>>\n>>> FWIW, this is the class of issues where it is possible to print some\n>>> areas of memory, or even manipulate the stack so as it was possible to\n>>> pass down a custom pointer, so exposing the pointer locations is a\n>>> real risk, and this has happened in the past. 
Anyway, it seems to me\n>>> that if this part is done, we could just make it superuser-only with\n>>> restrictive REVOKE privileges, but I am not sure that we have enough\n>>> user cases to justify this addition.\n>>\n>>\n>> Thanks for your comments!\n>>\n>> I convinced that exposing pointer locations introduce security risks\n>> and it seems better not to do so.\n>>\n>> And I now feel identifying exact memory context by exposing memory\n>> address or other means seems overkill.\n>> Showing just the context name of the parent would be sufficient and\n>> 0007 pattch takes this way.\n> \n> Agreed.\n> \n> \n>>\n>>\n>> On 2020-08-07 16:38, Kasahara Tatsuhito wrote:\n>>> The following review has been posted through the commitfest application:\n>>> make installcheck-world: tested, passed\n>>> Implements feature: tested, passed\n>>> Spec compliant: not tested\n>>> Documentation: tested, passed\n>>>\n>>> I tested the latest\n>>> patch(0007-Adding-a-function-exposing-memory-usage-of-local-backend.patch)\n>>> with the latest PG-version (199cec9779504c08aaa8159c6308283156547409)\n>>> and test was passed.\n>>> It looks good to me.\n>>>\n>>> The new status of this patch is: Ready for Committer\n>>\n>> Thanks for your testing!\n> \n> Thanks for updating the patch! Here are the review comments.\n> \n> + <row>\n> + <entry><link linkend=\"view-pg-backend-memory-contexts\"><structname>pg_backend_memory_contexts</structname></link></entry>\n> + <entry>backend memory contexts</entry>\n> + </row>\n> \n> The above is located just after pg_matviews entry. But it should be located\n> just after pg_available_extension_versions entry. 
Because the rows in the table\n> \"System Views\" should be located in alphabetical order.\n> \n> \n> + <sect1 id=\"view-pg-backend-memory-contexts\">\n> + <title><structname>pg_backend_memory_contexts</structname></title>\n> \n> Same as above.\n> \n> \n> + The view <structname>pg_backend_memory_contexts</structname> displays all\n> + the local backend memory contexts.\n> \n> This description seems a bit confusing because maybe we can interpret this\n> as \"... displays the memory contexts of all the local backends\" wrongly. Thought?\n> What about the following description, instead?\n> \n> The view <structname>pg_backend_memory_contexts</structname> displays all\n> the memory contexts of the server process attached to the current session.\n> \n> \n> + const char *name = context->name;\n> + const char *ident = context->ident;\n> +\n> + if (context == NULL)\n> + return;\n> \n> The above check \"context == NULL\" is useless? If \"context\" is actually NULL,\n> \"context->name\" would cause segmentation fault, so ISTM that the check\n> will never be performed.\n> \n> If \"context\" can be NULL, the check should be performed before accessing\n> to \"contect\". OTOH, if \"context\" must not be NULL per the specification of\n> PutMemoryContextStatsTupleStore(), assertion test checking\n> \"context != NULL\" should be used here, instead?\n\nHere is another comment.\n\n+\tif (parent == NULL)\n+\t\tnulls[2] = true;\n+\telse\n+\t\t/*\n+\t\t * We labeled dynahash contexts with just the hash table name.\n+\t\t * To make it possible to identify its parent, we also display\n+\t\t * parent's ident here.\n+\t\t */\n+\t\tif (parent->ident && strcmp(parent->name, \"dynahash\") == 0)\n+\t\t\tvalues[2] = CStringGetTextDatum(parent->ident);\n+\t\telse\n+\t\t\tvalues[2] = CStringGetTextDatum(parent->name);\n\nPutMemoryContextsStatsTupleStore() doesn't need \"parent\" memory context,\nbut uses only the name of \"parent\" memory context. 
So isn't it better to use\n\"const char *parent\" instead of \"MemoryContext parent\", as the argument of\nthe function? If we do that, we can simplify the above code.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 17 Aug 2020 21:19:53 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "On 2020-08-17 21:19, Fujii Masao wrote:\n> On 2020/08/17 21:14, Fujii Masao wrote:\n>>> On 2020-08-07 16:38, Kasahara Tatsuhito wrote:\n>>>> The following review has been posted through the commitfest \n>>>> application:\n>>>> make installcheck-world: tested, passed\n>>>> Implements feature: tested, passed\n>>>> Spec compliant: not tested\n>>>> Documentation: tested, passed\n>>>> \n>>>> I tested the latest\n>>>> patch(0007-Adding-a-function-exposing-memory-usage-of-local-backend.patch)\n>>>> with the latest PG-version \n>>>> (199cec9779504c08aaa8159c6308283156547409)\n>>>> and test was passed.\n>>>> It looks good to me.\n>>>> \n>>>> The new status of this patch is: Ready for Committer\n>>> \n>>> Thanks for your testing!\n>> \n>> Thanks for updating the patch! Here are the review comments.\n\nThanks for reviewing!\n\n>> \n>> + <row>\n>> + <entry><link \n>> linkend=\"view-pg-backend-memory-contexts\"><structname>pg_backend_memory_contexts</structname></link></entry>\n>> + <entry>backend memory contexts</entry>\n>> + </row>\n>> \n>> The above is located just after pg_matviews entry. But it should be \n>> located\n>> just after pg_available_extension_versions entry. Because the rows in \n>> the table\n>> \"System Views\" should be located in alphabetical order.\n>> \n>> \n>> + <sect1 id=\"view-pg-backend-memory-contexts\">\n>> + <title><structname>pg_backend_memory_contexts</structname></title>\n>> \n>> Same as above.\n\nModified both.\n\n>> \n>> \n>> + The view <structname>pg_backend_memory_contexts</structname> \n>> displays all\n>> + the local backend memory contexts.\n>> \n>> This description seems a bit confusing because maybe we can interpret \n>> this\n>> as \"... displays the memory contexts of all the local backends\" \n>> wrongly. 
Thought?\n>> What about the following description, instead?\n\n>> The view <structname>pg_backend_memory_contexts</structname> \n>> displays all\n>> the memory contexts of the server process attached to the current \n>> session.\n\nThanks! it seems better.\n\n>> + const char *name = context->name;\n>> + const char *ident = context->ident;\n>> +\n>> + if (context == NULL)\n>> + return;\n>> \n>> The above check \"context == NULL\" is useless? If \"context\" is actually \n>> NULL,\n>> \"context->name\" would cause segmentation fault, so ISTM that the check\n>> will never be performed.\n>> \n>> If \"context\" can be NULL, the check should be performed before \n>> accessing\n>> to \"contect\". OTOH, if \"context\" must not be NULL per the \n>> specification of\n>> PutMemoryContextStatsTupleStore(), assertion test checking\n>> \"context != NULL\" should be used here, instead?\n\nYeah, \"context\" cannot be NULL because \"context\" must be \nTopMemoryContext\nor it is already checked as not NULL as follows(child != NULL).\n\nI added the assertion check.\n\n| for (child = context->firstchild; child != NULL; child = \nchild->nextchild)\n| {\n| ...\n| PutMemoryContextsStatsTupleStore(tupstore, tupdesc,\n| child, \nparentname, level + 1);\n| }\n\n> Here is another comment.\n> \n> + if (parent == NULL)\n> + nulls[2] = true;\n> + else\n> + /*\n> + * We labeled dynahash contexts with just the hash table \n> name.\n> + * To make it possible to identify its parent, we also \n> display\n> + * parent's ident here.\n> + */\n> + if (parent->ident && strcmp(parent->name, \"dynahash\") == \n> 0)\n> + values[2] = CStringGetTextDatum(parent->ident);\n> + else\n> + values[2] = CStringGetTextDatum(parent->name);\n> \n> PutMemoryContextsStatsTupleStore() doesn't need \"parent\" memory \n> context,\n> but uses only the name of \"parent\" memory context. So isn't it better \n> to use\n> \"const char *parent\" instead of \"MemoryContext parent\", as the argument \n> of\n> the function? 
If we do that, we can simplify the above code.\n\nThanks, the attached patch adopted the advice.\n\nHowever, since PutMemoryContextsStatsTupleStore() used not only the name\nbut also the ident of the \"parent\", I could not help adding similar\ncode before calling the function.\nThe total amount of code and complexity does not seem to change much.\n\nAny thoughts? Am I misunderstanding something?\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION",
"msg_date": "Tue, 18 Aug 2020 18:41:47 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "\n\nOn 2020/08/18 18:41, torikoshia wrote:\n> On 2020-08-17 21:19, Fujii Masao wrote:\n>> On 2020/08/17 21:14, Fujii Masao wrote:\n>>>> On 2020-08-07 16:38, Kasahara Tatsuhito wrote:\n>>>>> The following review has been posted through the commitfest application:\n>>>>> make installcheck-world: tested, passed\n>>>>> Implements feature: tested, passed\n>>>>> Spec compliant: not tested\n>>>>> Documentation: tested, passed\n>>>>>\n>>>>> I tested the latest\n>>>>> patch(0007-Adding-a-function-exposing-memory-usage-of-local-backend.patch)\n>>>>> with the latest PG-version (199cec9779504c08aaa8159c6308283156547409)\n>>>>> and test was passed.\n>>>>> It looks good to me.\n>>>>>\n>>>>> The new status of this patch is: Ready for Committer\n>>>>\n>>>> Thanks for your testing!\n>>>\n>>> Thanks for updating the patch! Here are the review comments.\n> \n> Thanks for reviewing!\n> \n>>>\n>>> + <row>\n>>> + <entry><link linkend=\"view-pg-backend-memory-contexts\"><structname>pg_backend_memory_contexts</structname></link></entry>\n>>> + <entry>backend memory contexts</entry>\n>>> + </row>\n>>>\n>>> The above is located just after pg_matviews entry. But it should be located\n>>> just after pg_available_extension_versions entry. Because the rows in the table\n>>> \"System Views\" should be located in alphabetical order.\n>>>\n>>>\n>>> + <sect1 id=\"view-pg-backend-memory-contexts\">\n>>> + <title><structname>pg_backend_memory_contexts</structname></title>\n>>>\n>>> Same as above.\n> \n> Modified both.\n> \n>>>\n>>>\n>>> + The view <structname>pg_backend_memory_contexts</structname> displays all\n>>> + the local backend memory contexts.\n>>>\n>>> This description seems a bit confusing because maybe we can interpret this\n>>> as \"... displays the memory contexts of all the local backends\" wrongly. 
Thought?\n>>> What about the following description, instead?\n> \n>>> The view <structname>pg_backend_memory_contexts</structname> displays all\n>>> the memory contexts of the server process attached to the current session.\n> \n> Thanks! it seems better.\n> \n>>> + const char *name = context->name;\n>>> + const char *ident = context->ident;\n>>> +\n>>> + if (context == NULL)\n>>> + return;\n>>>\n>>> The above check \"context == NULL\" is useless? If \"context\" is actually NULL,\n>>> \"context->name\" would cause segmentation fault, so ISTM that the check\n>>> will never be performed.\n>>>\n>>> If \"context\" can be NULL, the check should be performed before accessing\n>>> to \"contect\". OTOH, if \"context\" must not be NULL per the specification of\n>>> PutMemoryContextStatsTupleStore(), assertion test checking\n>>> \"context != NULL\" should be used here, instead?\n> \n> Yeah, \"context\" cannot be NULL because \"context\" must be TopMemoryContext\n> or it is already checked as not NULL as follows(child != NULL).\n> \n> I added the assertion check.\n\nIsn't it better to add AssertArg(MemoryContextIsValid(context)), instead?\n\n\n> \n> | for (child = context->firstchild; child != NULL; child = child->nextchild)\n> | {\n> | ...\n> | PutMemoryContextsStatsTupleStore(tupstore, tupdesc,\n> | child, parentname, level + 1);\n> | }\n> \n>> Here is another comment.\n>>\n>> + if (parent == NULL)\n>> + nulls[2] = true;\n>> + else\n>> + /*\n>> + * We labeled dynahash contexts with just the hash table name.\n>> + * To make it possible to identify its parent, we also display\n>> + * parent's ident here.\n>> + */\n>> + if (parent->ident && strcmp(parent->name, \"dynahash\") == 0)\n>> + values[2] = CStringGetTextDatum(parent->ident);\n>> + else\n>> + values[2] = CStringGetTextDatum(parent->name);\n>>\n>> PutMemoryContextsStatsTupleStore() doesn't need \"parent\" memory context,\n>> but uses only the name of \"parent\" memory context. 
So isn't it better to use\n>> \"const char *parent\" instead of \"MemoryContext parent\", as the argument of\n>> the function? If we do that, we can simplify the above code.\n> \n> Thanks, the attached patch adopted the advice.\n> \n> However, since PutMemoryContextsStatsTupleStore() used not only the name\n> but also the ident of the \"parent\", I could not help but adding similar\n> codes before calling the function.\n> The total amount of codes and complexity seem not to change so much.\n> \n> Any thoughts? Am I misunderstanding something?\n\nI was thinking that we can simplify the code as follows.\nThat is, we can just pass \"name\" as the argument of PutMemoryContextsStatsTupleStore()\nsince \"name\" indicates context->name or ident (if name is \"dynahash\").\n\n \tfor (child = context->firstchild; child != NULL; child = child->nextchild)\n \t{\n-\t\tconst char *parentname;\n-\n-\t\t/*\n-\t\t * We labeled dynahash contexts with just the hash table name.\n-\t\t * To make it possible to identify its parent, we also use\n-\t\t * the hash table as its context name.\n-\t\t */\n-\t\tif (context->ident && strcmp(context->name, \"dynahash\") == 0)\n-\t\t\tparentname = context->ident;\n-\t\telse\n-\t\t\tparentname = context->name;\n-\n \t\tPutMemoryContextsStatsTupleStore(tupstore, tupdesc,\n-\t\t\t\t\t\t\t\t child, parentname, level + 1);\n+\t\t\t\t\t\t\t\t child, name, level + 1);\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 18 Aug 2020 22:54:30 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "On 2020-08-18 22:54, Fujii Masao wrote:\n> On 2020/08/18 18:41, torikoshia wrote:\n>> On 2020-08-17 21:19, Fujii Masao wrote:\n>>> On 2020/08/17 21:14, Fujii Masao wrote:\n>>>>> On 2020-08-07 16:38, Kasahara Tatsuhito wrote:\n>>>>>> The following review has been posted through the commitfest \n>>>>>> application:\n>>>>>> make installcheck-world: tested, passed\n>>>>>> Implements feature: tested, passed\n>>>>>> Spec compliant: not tested\n>>>>>> Documentation: tested, passed\n>>>>>> \n>>>>>> I tested the latest\n>>>>>> patch(0007-Adding-a-function-exposing-memory-usage-of-local-backend.patch)\n>>>>>> with the latest PG-version \n>>>>>> (199cec9779504c08aaa8159c6308283156547409)\n>>>>>> and test was passed.\n>>>>>> It looks good to me.\n>>>>>> \n>>>>>> The new status of this patch is: Ready for Committer\n>>>>> \n>>>>> Thanks for your testing!\n>>>> \n>>>> Thanks for updating the patch! Here are the review comments.\n>> \n>> Thanks for reviewing!\n>> \n>>>> \n>>>> + <row>\n>>>> + <entry><link \n>>>> linkend=\"view-pg-backend-memory-contexts\"><structname>pg_backend_memory_contexts</structname></link></entry>\n>>>> + <entry>backend memory contexts</entry>\n>>>> + </row>\n>>>> \n>>>> The above is located just after pg_matviews entry. But it should be \n>>>> located\n>>>> just after pg_available_extension_versions entry. Because the rows \n>>>> in the table\n>>>> \"System Views\" should be located in alphabetical order.\n>>>> \n>>>> \n>>>> + <sect1 id=\"view-pg-backend-memory-contexts\">\n>>>> + \n>>>> <title><structname>pg_backend_memory_contexts</structname></title>\n>>>> \n>>>> Same as above.\n>> \n>> Modified both.\n>> \n>>>> \n>>>> \n>>>> + The view <structname>pg_backend_memory_contexts</structname> \n>>>> displays all\n>>>> + the local backend memory contexts.\n>>>> \n>>>> This description seems a bit confusing because maybe we can \n>>>> interpret this\n>>>> as \"... displays the memory contexts of all the local backends\" \n>>>> wrongly. 
Thought?\n>>>> What about the following description, instead?\n>> \n>>>> The view <structname>pg_backend_memory_contexts</structname> \n>>>> displays all\n>>>> the memory contexts of the server process attached to the \n>>>> current session.\n>> \n>> Thanks! it seems better.\n>> \n>>>> + const char *name = context->name;\n>>>> + const char *ident = context->ident;\n>>>> +\n>>>> + if (context == NULL)\n>>>> + return;\n>>>> \n>>>> The above check \"context == NULL\" is useless? If \"context\" is \n>>>> actually NULL,\n>>>> \"context->name\" would cause segmentation fault, so ISTM that the \n>>>> check\n>>>> will never be performed.\n>>>> \n>>>> If \"context\" can be NULL, the check should be performed before \n>>>> accessing\n>>>> to \"contect\". OTOH, if \"context\" must not be NULL per the \n>>>> specification of\n>>>> PutMemoryContextStatsTupleStore(), assertion test checking\n>>>> \"context != NULL\" should be used here, instead?\n>> \n>> Yeah, \"context\" cannot be NULL because \"context\" must be \n>> TopMemoryContext\n>> or it is already checked as not NULL as follows(child != NULL).\n>> \n>> I added the assertion check.\n> \n> Isn't it better to add AssertArg(MemoryContextIsValid(context)), \n> instead?\n\nThanks, that's better.\n\n>> \n>> | for (child = context->firstchild; child != NULL; child = \n>> child->nextchild)\n>> | {\n>> | ...\n>> | PutMemoryContextsStatsTupleStore(tupstore, tupdesc,\n>> | child, \n>> parentname, level + 1);\n>> | }\n>> \n>>> Here is another comment.\n>>> \n>>> + if (parent == NULL)\n>>> + nulls[2] = true;\n>>> + else\n>>> + /*\n>>> + * We labeled dynahash contexts with just the hash \n>>> table name.\n>>> + * To make it possible to identify its parent, we also \n>>> display\n>>> + * parent's ident here.\n>>> + */\n>>> + if (parent->ident && strcmp(parent->name, \"dynahash\") \n>>> == 0)\n>>> + values[2] = CStringGetTextDatum(parent->ident);\n>>> + else\n>>> + values[2] = CStringGetTextDatum(parent->name);\n>>> \n>>> 
PutMemoryContextsStatsTupleStore() doesn't need \"parent\" memory \n>>> context,\n>>> but uses only the name of \"parent\" memory context. So isn't it better \n>>> to use\n>>> \"const char *parent\" instead of \"MemoryContext parent\", as the \n>>> argument of\n>>> the function? If we do that, we can simplify the above code.\n>> \n>> Thanks, the attached patch adopted the advice.\n>> \n>> However, since PutMemoryContextsStatsTupleStore() used not only the \n>> name\n>> but also the ident of the \"parent\", I could not help but adding \n>> similar\n>> codes before calling the function.\n>> The total amount of codes and complexity seem not to change so much.\n>> \n>> Any thoughts? Am I misunderstanding something?\n> \n> I was thinking that we can simplify the code as follows.\n> That is, we can just pass \"name\" as the argument of\n> PutMemoryContextsStatsTupleStore()\n> since \"name\" indicates context->name or ident (if name is \"dynahash\").\n> \n> \tfor (child = context->firstchild; child != NULL; child = \n> child->nextchild)\n> \t{\n> -\t\tconst char *parentname;\n> -\n> -\t\t/*\n> -\t\t * We labeled dynahash contexts with just the hash table name.\n> -\t\t * To make it possible to identify its parent, we also use\n> -\t\t * the hash table as its context name.\n> -\t\t */\n> -\t\tif (context->ident && strcmp(context->name, \"dynahash\") == 0)\n> -\t\t\tparentname = context->ident;\n> -\t\telse\n> -\t\t\tparentname = context->name;\n> -\n> \t\tPutMemoryContextsStatsTupleStore(tupstore, tupdesc,\n> -\t\t\t\t\t\t\t\t child, parentname, level + 1);\n> +\t\t\t\t\t\t\t\t child, name, level + 1);\n\nI got it, thanks for the clarification!\n\nAttached a revised patch.\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION",
"msg_date": "Wed, 19 Aug 2020 09:43:50 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "\n\nOn 2020/08/19 9:43, torikoshia wrote:\n> On 2020-08-18 22:54, Fujii Masao wrote:\n>> On 2020/08/18 18:41, torikoshia wrote:\n>>> On 2020-08-17 21:19, Fujii Masao wrote:\n>>>> On 2020/08/17 21:14, Fujii Masao wrote:\n>>>>>> On 2020-08-07 16:38, Kasahara Tatsuhito wrote:\n>>>>>>> The following review has been posted through the commitfest application:\n>>>>>>> make installcheck-world: tested, passed\n>>>>>>> Implements feature: tested, passed\n>>>>>>> Spec compliant: not tested\n>>>>>>> Documentation: tested, passed\n>>>>>>>\n>>>>>>> I tested the latest\n>>>>>>> patch(0007-Adding-a-function-exposing-memory-usage-of-local-backend.patch)\n>>>>>>> with the latest PG-version (199cec9779504c08aaa8159c6308283156547409)\n>>>>>>> and test was passed.\n>>>>>>> It looks good to me.\n>>>>>>>\n>>>>>>> The new status of this patch is: Ready for Committer\n>>>>>>\n>>>>>> Thanks for your testing!\n>>>>>\n>>>>> Thanks for updating the patch! Here are the review comments.\n>>>\n>>> Thanks for reviewing!\n>>>\n>>>>>\n>>>>> + <row>\n>>>>> + <entry><link linkend=\"view-pg-backend-memory-contexts\"><structname>pg_backend_memory_contexts</structname></link></entry>\n>>>>> + <entry>backend memory contexts</entry>\n>>>>> + </row>\n>>>>>\n>>>>> The above is located just after pg_matviews entry. But it should be located\n>>>>> just after pg_available_extension_versions entry. Because the rows in the table\n>>>>> \"System Views\" should be located in alphabetical order.\n>>>>>\n>>>>>\n>>>>> + <sect1 id=\"view-pg-backend-memory-contexts\">\n>>>>> + <title><structname>pg_backend_memory_contexts</structname></title>\n>>>>>\n>>>>> Same as above.\n>>>\n>>> Modified both.\n>>>\n>>>>>\n>>>>>\n>>>>> + The view <structname>pg_backend_memory_contexts</structname> displays all\n>>>>> + the local backend memory contexts.\n>>>>>\n>>>>> This description seems a bit confusing because maybe we can interpret this\n>>>>> as \"... 
displays the memory contexts of all the local backends\" wrongly. Thought?\n>>>>> What about the following description, instead?\n>>>\n>>>>> The view <structname>pg_backend_memory_contexts</structname> displays all\n>>>>> the memory contexts of the server process attached to the current session.\n>>>\n>>> Thanks! it seems better.\n>>>\n>>>>> + const char *name = context->name;\n>>>>> + const char *ident = context->ident;\n>>>>> +\n>>>>> + if (context == NULL)\n>>>>> + return;\n>>>>>\n>>>>> The above check \"context == NULL\" is useless? If \"context\" is actually NULL,\n>>>>> \"context->name\" would cause segmentation fault, so ISTM that the check\n>>>>> will never be performed.\n>>>>>\n>>>>> If \"context\" can be NULL, the check should be performed before accessing\n>>>>> to \"contect\". OTOH, if \"context\" must not be NULL per the specification of\n>>>>> PutMemoryContextStatsTupleStore(), assertion test checking\n>>>>> \"context != NULL\" should be used here, instead?\n>>>\n>>> Yeah, \"context\" cannot be NULL because \"context\" must be TopMemoryContext\n>>> or it is already checked as not NULL as follows(child != NULL).\n>>>\n>>> I added the assertion check.\n>>\n>> Isn't it better to add AssertArg(MemoryContextIsValid(context)), instead?\n> \n> Thanks, that's better.\n> \n>>>\n>>> | for (child = context->firstchild; child != NULL; child = child->nextchild)\n>>> | {\n>>> | ...\n>>> | PutMemoryContextsStatsTupleStore(tupstore, tupdesc,\n>>> | child, parentname, level + 1);\n>>> | }\n>>>\n>>>> Here is another comment.\n>>>>\n>>>> + if (parent == NULL)\n>>>> + nulls[2] = true;\n>>>> + else\n>>>> + /*\n>>>> + * We labeled dynahash contexts with just the hash table name.\n>>>> + * To make it possible to identify its parent, we also display\n>>>> + * parent's ident here.\n>>>> + */\n>>>> + if (parent->ident && strcmp(parent->name, \"dynahash\") == 0)\n>>>> + values[2] = CStringGetTextDatum(parent->ident);\n>>>> + else\n>>>> + values[2] = 
CStringGetTextDatum(parent->name);\n>>>>\n>>>> PutMemoryContextsStatsTupleStore() doesn't need \"parent\" memory context,\n>>>> but uses only the name of \"parent\" memory context. So isn't it better to use\n>>>> \"const char *parent\" instead of \"MemoryContext parent\", as the argument of\n>>>> the function? If we do that, we can simplify the above code.\n>>>\n>>> Thanks, the attached patch adopted the advice.\n>>>\n>>> However, since PutMemoryContextsStatsTupleStore() used not only the name\n>>> but also the ident of the \"parent\", I could not help but adding similar\n>>> codes before calling the function.\n>>> The total amount of codes and complexity seem not to change so much.\n>>>\n>>> Any thoughts? Am I misunderstanding something?\n>>\n>> I was thinking that we can simplify the code as follows.\n>> That is, we can just pass \"name\" as the argument of\n>> PutMemoryContextsStatsTupleStore()\n>> since \"name\" indicates context->name or ident (if name is \"dynahash\").\n>>\n>> for (child = context->firstchild; child != NULL; child = child->nextchild)\n>> {\n>> - const char *parentname;\n>> -\n>> - /*\n>> - * We labeled dynahash contexts with just the hash table name.\n>> - * To make it possible to identify its parent, we also use\n>> - * the hash table as its context name.\n>> - */\n>> - if (context->ident && strcmp(context->name, \"dynahash\") == 0)\n>> - parentname = context->ident;\n>> - else\n>> - parentname = context->name;\n>> -\n>> PutMemoryContextsStatsTupleStore(tupstore, tupdesc,\n>> - child, parentname, level + 1);\n>> + child, name, level + 1);\n> \n> I got it, thanks for the clarification!\n> \n> Attached a revised patch.\n\nThanks for updating the patch! I pushed it.\n\nBTW, I guess that you didn't add the regression test for this view because\nthe contents of the view are not stable. Right? 
But isn't it better to just\nadd the \"stable\" test like\n\n SELECT name, ident, parent, level, total_bytes >= free_bytes FROM pg_backend_memory_contexts WHERE level = 0;\n\nrather than adding nothing?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 19 Aug 2020 15:48:33 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "On 2020-08-19 15:48, Fujii Masao wrote:\n> On 2020/08/19 9:43, torikoshia wrote:\n>> On 2020-08-18 22:54, Fujii Masao wrote:\n>>> On 2020/08/18 18:41, torikoshia wrote:\n>>>> On 2020-08-17 21:19, Fujii Masao wrote:\n>>>>> On 2020/08/17 21:14, Fujii Masao wrote:\n>>>>>>> On 2020-08-07 16:38, Kasahara Tatsuhito wrote:\n>>>>>>>> The following review has been posted through the commitfest \n>>>>>>>> application:\n>>>>>>>> make installcheck-world: tested, passed\n>>>>>>>> Implements feature: tested, passed\n>>>>>>>> Spec compliant: not tested\n>>>>>>>> Documentation: tested, passed\n>>>>>>>> \n>>>>>>>> I tested the latest\n>>>>>>>> patch(0007-Adding-a-function-exposing-memory-usage-of-local-backend.patch)\n>>>>>>>> with the latest PG-version \n>>>>>>>> (199cec9779504c08aaa8159c6308283156547409)\n>>>>>>>> and test was passed.\n>>>>>>>> It looks good to me.\n>>>>>>>> \n>>>>>>>> The new status of this patch is: Ready for Committer\n>>>>>>> \n>>>>>>> Thanks for your testing!\n>>>>>> \n>>>>>> Thanks for updating the patch! Here are the review comments.\n>>>> \n>>>> Thanks for reviewing!\n>>>> \n>>>>>> \n>>>>>> + <row>\n>>>>>> + <entry><link \n>>>>>> linkend=\"view-pg-backend-memory-contexts\"><structname>pg_backend_memory_contexts</structname></link></entry>\n>>>>>> + <entry>backend memory contexts</entry>\n>>>>>> + </row>\n>>>>>> \n>>>>>> The above is located just after pg_matviews entry. But it should \n>>>>>> be located\n>>>>>> just after pg_available_extension_versions entry. 
Because the rows \n>>>>>> in the table\n>>>>>> \"System Views\" should be located in alphabetical order.\n>>>>>> \n>>>>>> \n>>>>>> + <sect1 id=\"view-pg-backend-memory-contexts\">\n>>>>>> + \n>>>>>> <title><structname>pg_backend_memory_contexts</structname></title>\n>>>>>> \n>>>>>> Same as above.\n>>>> \n>>>> Modified both.\n>>>> \n>>>>>> \n>>>>>> \n>>>>>> + The view <structname>pg_backend_memory_contexts</structname> \n>>>>>> displays all\n>>>>>> + the local backend memory contexts.\n>>>>>> \n>>>>>> This description seems a bit confusing because maybe we can \n>>>>>> interpret this\n>>>>>> as \"... displays the memory contexts of all the local backends\" \n>>>>>> wrongly. Thought?\n>>>>>> What about the following description, instead?\n>>>> \n>>>>>> The view <structname>pg_backend_memory_contexts</structname> \n>>>>>> displays all\n>>>>>> the memory contexts of the server process attached to the \n>>>>>> current session.\n>>>> \n>>>> Thanks! it seems better.\n>>>> \n>>>>>> + const char *name = context->name;\n>>>>>> + const char *ident = context->ident;\n>>>>>> +\n>>>>>> + if (context == NULL)\n>>>>>> + return;\n>>>>>> \n>>>>>> The above check \"context == NULL\" is useless? If \"context\" is \n>>>>>> actually NULL,\n>>>>>> \"context->name\" would cause segmentation fault, so ISTM that the \n>>>>>> check\n>>>>>> will never be performed.\n>>>>>> \n>>>>>> If \"context\" can be NULL, the check should be performed before \n>>>>>> accessing\n>>>>>> to \"contect\". 
OTOH, if \"context\" must not be NULL per the \n>>>>>> specification of\n>>>>>> PutMemoryContextStatsTupleStore(), assertion test checking\n>>>>>> \"context != NULL\" should be used here, instead?\n>>>> \n>>>> Yeah, \"context\" cannot be NULL because \"context\" must be \n>>>> TopMemoryContext\n>>>> or it is already checked as not NULL as follows(child != NULL).\n>>>> \n>>>> I added the assertion check.\n>>> \n>>> Isn't it better to add AssertArg(MemoryContextIsValid(context)), \n>>> instead?\n>> \n>> Thanks, that's better.\n>> \n>>>> \n>>>> | for (child = context->firstchild; child != NULL; child = \n>>>> child->nextchild)\n>>>> | {\n>>>> | ...\n>>>> | PutMemoryContextsStatsTupleStore(tupstore, tupdesc,\n>>>> | child, \n>>>> parentname, level + 1);\n>>>> | }\n>>>> \n>>>>> Here is another comment.\n>>>>> \n>>>>> + if (parent == NULL)\n>>>>> + nulls[2] = true;\n>>>>> + else\n>>>>> + /*\n>>>>> + * We labeled dynahash contexts with just the hash \n>>>>> table name.\n>>>>> + * To make it possible to identify its parent, we \n>>>>> also display\n>>>>> + * parent's ident here.\n>>>>> + */\n>>>>> + if (parent->ident && strcmp(parent->name, \"dynahash\") \n>>>>> == 0)\n>>>>> + values[2] = \n>>>>> CStringGetTextDatum(parent->ident);\n>>>>> + else\n>>>>> + values[2] = \n>>>>> CStringGetTextDatum(parent->name);\n>>>>> \n>>>>> PutMemoryContextsStatsTupleStore() doesn't need \"parent\" memory \n>>>>> context,\n>>>>> but uses only the name of \"parent\" memory context. So isn't it \n>>>>> better to use\n>>>>> \"const char *parent\" instead of \"MemoryContext parent\", as the \n>>>>> argument of\n>>>>> the function? 
If we do that, we can simplify the above code.\n>>>> \n>>>> Thanks, the attached patch adopted the advice.\n>>>> \n>>>> However, since PutMemoryContextsStatsTupleStore() used not only the \n>>>> name\n>>>> but also the ident of the \"parent\", I could not help but adding \n>>>> similar\n>>>> codes before calling the function.\n>>>> The total amount of codes and complexity seem not to change so much.\n>>>> \n>>>> Any thoughts? Am I misunderstanding something?\n>>> \n>>> I was thinking that we can simplify the code as follows.\n>>> That is, we can just pass \"name\" as the argument of\n>>> PutMemoryContextsStatsTupleStore()\n>>> since \"name\" indicates context->name or ident (if name is \n>>> \"dynahash\").\n>>> \n>>> for (child = context->firstchild; child != NULL; child = \n>>> child->nextchild)\n>>> {\n>>> - const char *parentname;\n>>> -\n>>> - /*\n>>> - * We labeled dynahash contexts with just the hash table \n>>> name.\n>>> - * To make it possible to identify its parent, we also use\n>>> - * the hash table as its context name.\n>>> - */\n>>> - if (context->ident && strcmp(context->name, \"dynahash\") == \n>>> 0)\n>>> - parentname = context->ident;\n>>> - else\n>>> - parentname = context->name;\n>>> -\n>>> PutMemoryContextsStatsTupleStore(tupstore, tupdesc,\n>>> - child, parentname, level + 1);\n>>> + child, name, level + 1);\n>> \n>> I got it, thanks for the clarification!\n>> \n>> Attached a revised patch.\n> \n> Thanks for updating the patch! I pushed it.\n\nThanks a lot!\n\n> \n> BTW, I guess that you didn't add the regression test for this view \n> because\n> the contents of the view are not stable. Right? 
But isn't it better to \n> just\n> add the \"stable\" test like\n> \n> SELECT name, ident, parent, level, total_bytes >= free_bytes FROM\n> pg_backend_memory_contexts WHERE level = 0;\n> \n> rather than adding nothing?\n\nYes, I didn't add regression tests because of the unstability of the \noutput.\nI thought it would be OK since other views like pg_stat_slru and \npg_shmem_allocations\ndidn't have tests for their outputs.\n\nI don't have strong objections for adding tests like you proposed, but \nI'm not sure\nwhether there are special reasons to add tests compared with these \nexisting views.\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 19 Aug 2020 17:40:00 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "\n\nOn 2020/08/19 17:40, torikoshia wrote:\n> On 2020-08-19 15:48, Fujii Masao wrote:\n>> On 2020/08/19 9:43, torikoshia wrote:\n>>> On 2020-08-18 22:54, Fujii Masao wrote:\n>>>> On 2020/08/18 18:41, torikoshia wrote:\n>>>>> On 2020-08-17 21:19, Fujii Masao wrote:\n>>>>>> On 2020/08/17 21:14, Fujii Masao wrote:\n>>>>>>>> On 2020-08-07 16:38, Kasahara Tatsuhito wrote:\n>>>>>>>>> The following review has been posted through the commitfest application:\n>>>>>>>>> make installcheck-world: tested, passed\n>>>>>>>>> Implements feature: tested, passed\n>>>>>>>>> Spec compliant: not tested\n>>>>>>>>> Documentation: tested, passed\n>>>>>>>>>\n>>>>>>>>> I tested the latest\n>>>>>>>>> patch(0007-Adding-a-function-exposing-memory-usage-of-local-backend.patch)\n>>>>>>>>> with the latest PG-version (199cec9779504c08aaa8159c6308283156547409)\n>>>>>>>>> and test was passed.\n>>>>>>>>> It looks good to me.\n>>>>>>>>>\n>>>>>>>>> The new status of this patch is: Ready for Committer\n>>>>>>>>\n>>>>>>>> Thanks for your testing!\n>>>>>>>\n>>>>>>> Thanks for updating the patch! Here are the review comments.\n>>>>>\n>>>>> Thanks for reviewing!\n>>>>>\n>>>>>>>\n>>>>>>> + <row>\n>>>>>>> + <entry><link linkend=\"view-pg-backend-memory-contexts\"><structname>pg_backend_memory_contexts</structname></link></entry>\n>>>>>>> + <entry>backend memory contexts</entry>\n>>>>>>> + </row>\n>>>>>>>\n>>>>>>> The above is located just after pg_matviews entry. But it should be located\n>>>>>>> just after pg_available_extension_versions entry. 
Because the rows in the table\n>>>>>>> \"System Views\" should be located in alphabetical order.\n>>>>>>>\n>>>>>>>\n>>>>>>> + <sect1 id=\"view-pg-backend-memory-contexts\">\n>>>>>>> + <title><structname>pg_backend_memory_contexts</structname></title>\n>>>>>>>\n>>>>>>> Same as above.\n>>>>>\n>>>>> Modified both.\n>>>>>\n>>>>>>>\n>>>>>>>\n>>>>>>> + The view <structname>pg_backend_memory_contexts</structname> displays all\n>>>>>>> + the local backend memory contexts.\n>>>>>>>\n>>>>>>> This description seems a bit confusing because maybe we can interpret this\n>>>>>>> as \"... displays the memory contexts of all the local backends\" wrongly. Thought?\n>>>>>>> What about the following description, instead?\n>>>>>\n>>>>>>> The view <structname>pg_backend_memory_contexts</structname> displays all\n>>>>>>> the memory contexts of the server process attached to the current session.\n>>>>>\n>>>>> Thanks! it seems better.\n>>>>>\n>>>>>>> + const char *name = context->name;\n>>>>>>> + const char *ident = context->ident;\n>>>>>>> +\n>>>>>>> + if (context == NULL)\n>>>>>>> + return;\n>>>>>>>\n>>>>>>> The above check \"context == NULL\" is useless? If \"context\" is actually NULL,\n>>>>>>> \"context->name\" would cause segmentation fault, so ISTM that the check\n>>>>>>> will never be performed.\n>>>>>>>\n>>>>>>> If \"context\" can be NULL, the check should be performed before accessing\n>>>>>>> to \"contect\". 
OTOH, if \"context\" must not be NULL per the specification of\n>>>>>>> PutMemoryContextStatsTupleStore(), assertion test checking\n>>>>>>> \"context != NULL\" should be used here, instead?\n>>>>>\n>>>>> Yeah, \"context\" cannot be NULL because \"context\" must be TopMemoryContext\n>>>>> or it is already checked as not NULL as follows(child != NULL).\n>>>>>\n>>>>> I added the assertion check.\n>>>>\n>>>> Isn't it better to add AssertArg(MemoryContextIsValid(context)), instead?\n>>>\n>>> Thanks, that's better.\n>>>\n>>>>>\n>>>>> | for (child = context->firstchild; child != NULL; child = child->nextchild)\n>>>>> | {\n>>>>> | ...\n>>>>> | PutMemoryContextsStatsTupleStore(tupstore, tupdesc,\n>>>>> | child, parentname, level + 1);\n>>>>> | }\n>>>>>\n>>>>>> Here is another comment.\n>>>>>>\n>>>>>> + if (parent == NULL)\n>>>>>> + nulls[2] = true;\n>>>>>> + else\n>>>>>> + /*\n>>>>>> + * We labeled dynahash contexts with just the hash table name.\n>>>>>> + * To make it possible to identify its parent, we also display\n>>>>>> + * parent's ident here.\n>>>>>> + */\n>>>>>> + if (parent->ident && strcmp(parent->name, \"dynahash\") == 0)\n>>>>>> + values[2] = CStringGetTextDatum(parent->ident);\n>>>>>> + else\n>>>>>> + values[2] = CStringGetTextDatum(parent->name);\n>>>>>>\n>>>>>> PutMemoryContextsStatsTupleStore() doesn't need \"parent\" memory context,\n>>>>>> but uses only the name of \"parent\" memory context. So isn't it better to use\n>>>>>> \"const char *parent\" instead of \"MemoryContext parent\", as the argument of\n>>>>>> the function? If we do that, we can simplify the above code.\n>>>>>\n>>>>> Thanks, the attached patch adopted the advice.\n>>>>>\n>>>>> However, since PutMemoryContextsStatsTupleStore() used not only the name\n>>>>> but also the ident of the \"parent\", I could not help but adding similar\n>>>>> codes before calling the function.\n>>>>> The total amount of codes and complexity seem not to change so much.\n>>>>>\n>>>>> Any thoughts? 
Am I misunderstanding something?\n>>>>\n>>>> I was thinking that we can simplify the code as follows.\n>>>> That is, we can just pass \"name\" as the argument of\n>>>> PutMemoryContextsStatsTupleStore()\n>>>> since \"name\" indicates context->name or ident (if name is \"dynahash\").\n>>>>\n>>>> for (child = context->firstchild; child != NULL; child = child->nextchild)\n>>>> {\n>>>> - const char *parentname;\n>>>> -\n>>>> - /*\n>>>> - * We labeled dynahash contexts with just the hash table name.\n>>>> - * To make it possible to identify its parent, we also use\n>>>> - * the hash table as its context name.\n>>>> - */\n>>>> - if (context->ident && strcmp(context->name, \"dynahash\") == 0)\n>>>> - parentname = context->ident;\n>>>> - else\n>>>> - parentname = context->name;\n>>>> -\n>>>> PutMemoryContextsStatsTupleStore(tupstore, tupdesc,\n>>>> - child, parentname, level + 1);\n>>>> + child, name, level + 1);\n>>>\n>>> I got it, thanks for the clarification!\n>>>\n>>> Attached a revised patch.\n>>\n>> Thanks for updating the patch! I pushed it.\n> \n> Thanks a lot!\n> \n>>\n>> BTW, I guess that you didn't add the regression test for this view because\n>> the contents of the view are not stable. Right? But isn't it better to just\n>> add the \"stable\" test like\n>>\n>> SELECT name, ident, parent, level, total_bytes >= free_bytes FROM\n>> pg_backend_memory_contexts WHERE level = 0;\n>>\n>> rather than adding nothing?\n> \n> Yes, I didn't add regression tests because of the unstability of the output.\n> I thought it would be OK since other views like pg_stat_slru and pg_shmem_allocations\n> didn't have tests for their outputs.\n\nYou're right.\n\n> I don't have strong objections for adding tests like you proposed, but I'm not sure\n> whether there are special reasons to add tests compared with these existing views.\n\nOk, understood. 
So I'd withdraw my suggestion.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 19 Aug 2020 18:12:02 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "On Wed, Aug 19, 2020 at 06:12:02PM +0900, Fujii Masao wrote:\n> On 2020/08/19 17:40, torikoshia wrote:\n>> Yes, I didn't add regression tests because of the unstability of the output.\n>> I thought it would be OK since other views like pg_stat_slru and pg_shmem_allocations\n>> didn't have tests for their outputs.\n> \n> You're right.\n\nIf you can make a test with something minimal and with a stable\noutput, adding a test is helpful IMO, or how can you make easily sure\nthat this does not get broken, particularly in the event of future\nrefactorings, or even with platform-dependent behaviors?\n\nBy the way, I was looking at the code that has been committed, and I\nthink that it is awkward to have a SQL function in mcxt.c, which is a\nrather low-level interface. I think that this new code should be\nmoved to its own file, one suggestion for a location I have being\nsrc/backend/utils/adt/mcxtfuncs.c.\n--\nMichael",
"msg_date": "Wed, 19 Aug 2020 22:55:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "Hadn't been paying attention to this thread up till now, but ...\n\nMichael Paquier <michael@paquier.xyz> writes:\n> By the way, I was looking at the code that has been committed, and I\n> think that it is awkward to have a SQL function in mcxt.c, which is a\n> rather low-level interface. I think that this new code should be\n> moved to its own file, one suggestion for a location I have being\n> src/backend/utils/adt/mcxtfuncs.c.\n\nI agree with that, but I think this patch has a bigger problem:\nwhy bother at all? It seems like a waste of code space and future\nmaintenance effort, because there is no use-case. In the situations\nwhere you need to know where the memory went, you are almost never\nin a position to leisurely execute a query and send the results over\nto your client. This certainly would be useless to figure out why\nan already-running query is eating space, for instance.\n\nThe only situation I could imagine where this would have any use is\nwhere there is long-term (cross-query) bloat in, say, CacheMemoryContext\n--- but it's not even very helpful for that, since you can't examine\nanything finer-grain than a memory context. 
Plus you need to be\nrunning an interactive session, or else be willing to hack up your\napplication to try to get it to inspect the view (and log the\nresults somewhere) at useful times.\n\nOn top of all that, the functionality has Heisenberg problems,\nbecause simply using it changes what you are trying to observe,\nin complex and undocumented ways (not that the documentation\nwould be of any use to non-experts anyway).\n\nMy own thoughts about improving the debugging situation would've been\nto create a way to send a signal to a session to make it dump its\ncurrent memory map to the postmaster log (not the client, since the\nclient is unlikely to be prepared to receive anything extraneous).\nBut this is nothing like that.\n\nGiven the lack of clear use-case, and the possibility (admittedly\nnot strong) that this is still somehow a security hazard, I think\nwe should revert it. If it stays, I'd like to see restrictions\non who can read the view.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 19 Aug 2020 11:01:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "Hi,\n\nOn 2020-08-19 11:01:37 -0400, Tom Lane wrote:\n> Hadn't been paying attention to this thread up till now, but ...\n> \n> Michael Paquier <michael@paquier.xyz> writes:\n> > By the way, I was looking at the code that has been committed, and I\n> > think that it is awkward to have a SQL function in mcxt.c, which is a\n> > rather low-level interface. I think that this new code should be\n> > moved to its own file, one suggestion for a location I have being\n> > src/backend/utils/adt/mcxtfuncs.c.\n> \n> I agree with that, but I think this patch has a bigger problem:\n> why bother at all? It seems like a waste of code space and future\n> maintenance effort, because there is no use-case. In the situations\n> where you need to know where the memory went, you are almost never\n> in a position to leisurely execute a query and send the results over\n> to your client. This certainly would be useless to figure out why\n> an already-running query is eating space, for instance.\n\nI don't agree with this at all. I think there's plenty use cases. It's\ne.g. very common to try to figure out why the memory usage of a process\nis high. Is it memory not returned to the OS? Is it caches that have\ngrown too much etc.\n\nI agree it's not perfect:\n\n> Plus you need to be running an interactive session, or else be willing\n> to hack up your application to try to get it to inspect the view (and\n> log the results somewhere) at useful times.\n\nand that we likely should address that by *also* allowing to view the\nmemory usage of another process. Which obviously isn't entirely\ntrivial. 
But some infrastructure likely could be reused.\n\n\n> My own thoughts about improving the debugging situation would've been\n> to create a way to send a signal to a session to make it dump its\n> current memory map to the postmaster log (not the client, since the\n> client is unlikely to be prepared to receive anything extraneous).\n> But this is nothing like that.\n\nThat doesn't really work in a large number of environments, I'm\nafraid. Many many users don't have access to the server log.\n\n\n> If it stays, I'd like to see restrictions on who can read the view.\n\nAs long as access is grantable rather than needing a security definer\nwrapper I'd be fine with that.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 19 Aug 2020 15:53:43 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-08-19 11:01:37 -0400, Tom Lane wrote:\n>> I agree with that, but I think this patch has a bigger problem:\n>> why bother at all? It seems like a waste of code space and future\n>> maintenance effort, because there is no use-case.\n\n> I don't agree with this at all. I think there's plenty use cases. It's\n> e.g. very common to try to figure out why the memory usage of a process\n> is high. Is it memory not returned to the OS? Is it caches that have\n> grown too much etc.\n\nOh, I agree completely that there are lots of use-cases for finding\nout what a process' memory map looks like. But this patch fails to\naddress any of them in a usable way.\n\n>> My own thoughts about improving the debugging situation would've been\n>> to create a way to send a signal to a session to make it dump its\n>> current memory map to the postmaster log (not the client, since the\n>> client is unlikely to be prepared to receive anything extraneous).\n\n> That doesn't really work in a large number of environments, I'm\n> afraid. Many many users don't have access to the server log.\n\nMy rationale for that was (a) it can be implemented without a lot\nof impact on the memory map, and (b) requiring access to the server\nlog eliminates questions about whether you have enough privilege to\nexamine the map. I'm prepared to compromise about (b), but less so\nabout (a). Having to run a SQL query to find this out is a mess.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 19 Aug 2020 21:29:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "Hi,\n\nOn 2020-08-19 21:29:06 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2020-08-19 11:01:37 -0400, Tom Lane wrote:\n> >> I agree with that, but I think this patch has a bigger problem:\n> >> why bother at all? It seems like a waste of code space and future\n> >> maintenance effort, because there is no use-case.\n> \n> > I don't agree with this at all. I think there's plenty use cases. It's\n> > e.g. very common to try to figure out why the memory usage of a process\n> > is high. Is it memory not returned to the OS? Is it caches that have\n> > grown too much etc.\n> \n> Oh, I agree completely that there are lots of use-cases for finding\n> out what a process' memory map looks like. But this patch fails to\n> address any of them in a usable way.\n\nEven just being able to see the memory usage in a queryable way is a\nhuge benefit. The difference over having to parse the log, then parse\nthe memory usage dump, and then aggregate the data in there in a\nmeaningful way is *huge*. We've been slacking around lowering our\nmemory usage, and I think the fact that it's annoying to analyze is a\npartial reason for that.\n\nI totally agree that it's not *enough*, but in contrast to you I think\nit's a good step. Subsequently we should add a way to get any backends\nmemory usage.\nIt's not too hard to imagine how to serialize it in a way that can be\neasily deserialized by another backend. I am imagining something like\nsending a procsignal that triggers (probably at CFR() time) a backend to\nwrite its own memory usage into pg_memusage/<pid> or something roughly\nlike that.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 19 Aug 2020 18:43:01 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "\n\nOn 2020/08/20 0:01, Tom Lane wrote:\n> Hadn't been paying attention to this thread up till now, but ...\n> \n> Michael Paquier <michael@paquier.xyz> writes:\n>> By the way, I was looking at the code that has been committed, and I\n>> think that it is awkward to have a SQL function in mcxt.c, which is a\n>> rather low-level interface. I think that this new code should be\n>> moved to its own file, one suggestion for a location I have being\n>> src/backend/utils/adt/mcxtfuncs.c.\n> \n> I agree with that, but I think this patch has a bigger problem:\n> why bother at all? It seems like a waste of code space and future\n> maintenance effort, because there is no use-case. In the situations\n> where you need to know where the memory went, you are almost never\n> in a position to leisurely execute a query and send the results over\n> to your client. This certainly would be useless to figure out why\n> an already-running query is eating space, for instance.\n> \n> The only situation I could imagine where this would have any use is\n> where there is long-term (cross-query) bloat in, say, CacheMemoryContext\n\nYes, this feature is useful to check a cross-query memory bloat like\nthe bloats of relcache, prepared statements, PL/pgSQL cache,\nSMgrRelationHash, etc. 
For example, several years ago, my colleague\ninvestigated the cause of the memory bloat by using almost the same\nfeature that the pg_cheat_funcs extension provides, and then found that\nthe cause was that the application forgot to release lots of SAVEPOINTs.\n\n\n> --- but it's not even very helpful for that, since you can't examine\n> anything finer-grain than a memory context.\n\nYes, but even that information can be a good hint when investigating\nthe memory bloat.\n\n\n> Plus you need to be\n> running an interactive session, or else be willing to hack up your\n> application to try to get it to inspect the view (and log the\n> results somewhere) at useful times.\n> \n> On top of all that, the functionality has Heisenberg problems,\n> because simply using it changes what you are trying to observe,\n> in complex and undocumented ways (not that the documentation\n> would be of any use to non-experts anyway).\n> \n> My own thoughts about improving the debugging situation would've been\n> to create a way to send a signal to a session to make it dump its\n> current memory map to the postmaster log (not the client, since the\n> client is unlikely to be prepared to receive anything extraneous).\n> But this is nothing like that.\n\nI agree that we need something like this, i.e., a way to monitor the memory\nusage of any process even while a query is running. OTOH, I think that the added\nfeature has a use case and is good as a first step.\n\n\n> Given the lack of clear use-case, and the possibility (admittedly\n> not strong) that this is still somehow a security hazard, I think\n> we should revert it. If it stays, I'd like to see restrictions\n> on who can read the view.\n\nFor example, allowing only the role with pg_monitor to see this view?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 20 Aug 2020 11:09:42 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "\n\nOn 2020/08/20 10:43, Andres Freund wrote:\n> Hi,\n> \n> On 2020-08-19 21:29:06 -0400, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> On 2020-08-19 11:01:37 -0400, Tom Lane wrote:\n>>>> I agree with that, but I think this patch has a bigger problem:\n>>>> why bother at all? It seems like a waste of code space and future\n>>>> maintenance effort, because there is no use-case.\n>>\n>>> I don't agree with this at all. I think there's plenty use cases. It's\n>>> e.g. very common to try to figure out why the memory usage of a process\n>>> is high. Is it memory not returned to the OS? Is it caches that have\n>>> grown too much etc.\n>>\n>> Oh, I agree completely that there are lots of use-cases for finding\n>> out what a process' memory map looks like. But this patch fails to\n>> address any of them in a usable way.\n> \n> Even just being able to see the memory usage in a queryable way is a\n> huge benefit.\n\n+1\n\n> The difference over having to parse the log, then parse\n> the memory usage dump, and then aggregate the data in there in a\n> meaningful way is *huge*. We've been slacking around lowering our\n> memory usage, and I think the fact that it's annoying to analyze is a\n> partial reason for that.\n\nAgreed.\n\n \n> I totally agree that it's not *enough*, but in contrast to you I think\n> it's a good step. Subsequently we should add a way to get any backends\n> memory usage.\n> It's not too hard to imagine how to serialize it in a way that can be\n> easily deserialized by another backend. I am imagining something like\n> sending a procsignal that triggers (probably at CFR() time) a backend to\n> write its own memory usage into pg_memusage/<pid> or something roughly\n> like that.\n\nSounds good. 
Maybe we can also provide the SQL-callable function\nor view to read pg_memusage/<pid>, to make the analysis easier.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 20 Aug 2020 11:17:58 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "On 2020/08/20 0:01, Tom Lane wrote:\n> The only situation I could imagine where this would have any use is\n> where there is long-term (cross-query) bloat in, say, CacheMemoryContext\nYeah, in cases where a very large number of sessions are connected to\nthe DB for very long periods of time, the memory consumption of the\nback-end processes may increase slowly, and such a feature is useful\nfor analysis.\n\nAnd, as Fujii said, this feature is very useful to see which contexts are\nconsuming a lot of memory and to narrow down the causes.\n\nOn Thu, Aug 20, 2020 at 11:18 AM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> On 2020/08/20 10:43, Andres Freund wrote:\n> > Hi,\n> > Even just being able to see the memory usage in a queryable way is a\n> > huge benefit.\n>\n> +1\n+1\n\nI think this feature is very useful in environments where gdb is not\navailable or access to server logs is limited.\nAnd it is cumbersome to extract and analyze the memory information\nfrom very large server logs.\n\n\n> > I totally agree that it's not *enough*, but in contrast to you I think\n> > it's a good step. Subsequently we should add a way to get any backends\n> > memory usage.\n> > It's not too hard to imagine how to serialize it in a way that can be\n> > easily deserialized by another backend. I am imagining something like\n> > sending a procsignal that triggers (probably at CFR() time) a backend to\n> > write its own memory usage into pg_memusage/<pid> or something roughly\n> > like that.\n>\n> Sounds good. Maybe we can also provide the SQL-callable function\n> or view to read pg_memusage/<pid>, to make the analysis easier.\n+1\n\nBest regards,\n\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com\n\n\n",
"msg_date": "Thu, 20 Aug 2020 11:59:20 +0900",
"msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "Thanks for all your comments!\n\nThankfully, it seems that this feature is not regarded as meaningless,\nso I'm going to make some improvements.\n\n\nOn Wed, Aug 19, 2020 at 10:56 PM Michael Paquier <michael@paquier.xyz> \nwrote:\n> On Wed, Aug 19, 2020 at 06:12:02PM +0900, Fujii Masao wrote:\n>> On 2020/08/19 17:40, torikoshia wrote:\n>>> Yes, I didn't add regression tests because of the unstability of the \n>>> output.\n>>> I thought it would be OK since other views like pg_stat_slru and \n>>> pg_shmem_allocations\n>>> didn't have tests for their outputs.\n>> \n>> You're right.\n> \n> If you can make a test with something minimal and with a stable\n> output, adding a test is helpful IMO, or how can you make easily sure\n> that this does not get broken, particularly in the event of future\n> refactorings, or even with platform-dependent behaviors?\n\nOK. Added a regression test on sysviews.sql.\n(0001-Added-a-regression-test-for-pg_backend_memory_contex.patch)\n\nFujii-san gave us an example, but I added more simple one considering\nthe simplicity of other tests on that.\n\n\nOn Thu, Aug 20, 2020 at 12:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n> > By the way, I was looking at the code that has been committed, and I\n> > think that it is awkward to have a SQL function in mcxt.c, which is a\n> > rather low-level interface. 
I think that this new code should be\n> > moved to its own file, one suggestion for a location I have being\n> > src/backend/utils/adt/mcxtfuncs.c.\n> \n> I agree with that,\n\nThanks for pointing out.\nAdded a patch for relocating the codes to mcxtfuncs.c.\n(patches/0001-Rellocated-the-codes-for-pg_backend_memory_contexts-.patch)\n\n\n> On Thu, Aug 20, 2020 at 11:09 AM Fujii Masao \n> <masao.fujii@oss.nttdata.com> wrote:\n> On 2020/08/20 0:01, Tom Lane wrote:\n>> Given the lack of clear use-case, and the possibility (admittedly\n>> not strong) that this is still somehow a security hazard, I think\n>> we should revert it. If it stays, I'd like to see restrictions\n>> on who can read the view.\n\n> For example, allowing only the role with pg_monitor to see this view?\n\nAttached a patch adding that restriction.\n(0001-Restrict-the-access-to-pg_backend_memory_contexts-to.patch)\n\nOf course, this restriction makes pg_backend_memory_contexts hard to use\nwhen the user of the target session is not granted pg_monitor because \nthe\nscope of this view is session local.\n\nIn this case, I imagine additional operations something like temporarily\ngranting pg_monitor to that user.\n\nThoughts?\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION",
"msg_date": "Fri, 21 Aug 2020 23:27:06 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "On Fri, Aug 21, 2020 at 11:27:06PM +0900, torikoshia wrote:\n> OK. Added a regression test on sysviews.sql.\n> (0001-Added-a-regression-test-for-pg_backend_memory_contex.patch)\n> \n> Fujii-san gave us an example, but I added more simple one considering\n> the simplicity of other tests on that.\n\nWhat you have sent in 0001 looks fine to me. A small test is much\nbetter than nothing.\n\n> Added a patch for relocating the codes to mcxtfuncs.c.\n> (patches/0001-Rellocated-the-codes-for-pg_backend_memory_contexts-.patch)\n\nThe same code is moved around line-by-line.\n\n> Of course, this restriction makes pg_backend_memory_contexts hard to use\n> when the user of the target session is not granted pg_monitor because the\n> scope of this view is session local.\n> \n> In this case, I imagine additional operations something like temporarily\n> granting pg_monitor to that user.\n\nHmm. I am not completely sure either that pg_monitor is the best fit\nhere, because this view provides information about a bunch of internal\nstructures. Something that could easily be done though is to revoke\nthe access from public, and then users could just set up GRANT\npermissions post-initdb, with pg_monitor as one possible choice. This\nis the safest path by default, and this stuff is of a caliber similar\nto pg_shmem_allocations in terms of internal contents.\n\nIt seems to me that you are missing one \"REVOKE ALL on\npg_backend_memory_contexts FROM PUBLIC\" in patch 0003.\n\nBy the way, if that was just for me, I would remove used_bytes, which\nis just a computation from the total and free numbers. I'll defer\nthat point to Fujii-san.\n--\nMichael",
"msg_date": "Sat, 22 Aug 2020 21:18:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "On 2020-08-22 21:18, Michael Paquier wrote:\n\nThanks for reviewing!\n\n> On Fri, Aug 21, 2020 at 11:27:06PM +0900, torikoshia wrote:\n>> OK. Added a regression test on sysviews.sql.\n>> (0001-Added-a-regression-test-for-pg_backend_memory_contex.patch)\n>> \n>> Fujii-san gave us an example, but I added more simple one considering\n>> the simplicity of other tests on that.\n> \n> What you have sent in 0001 looks fine to me. A small test is much\n> better than nothing.\n> \n>> Added a patch for relocating the codes to mcxtfuncs.c.\n>> (patches/0001-Rellocated-the-codes-for-pg_backend_memory_contexts-.patch)\n> \n> The same code is moved around line-by-line.\n> \n>> Of course, this restriction makes pg_backend_memory_contexts hard to \n>> use\n>> when the user of the target session is not granted pg_monitor because \n>> the\n>> scope of this view is session local.\n>> \n>> In this case, I imagine additional operations something like \n>> temporarily\n>> granting pg_monitor to that user.\n> \n> Hmm. I am not completely sure either that pg_monitor is the best fit\n> here, because this view provides information about a bunch of internal\n> structures. Something that could easily be done though is to revoke\n> the access from public, and then users could just set up GRANT\n> permissions post-initdb, with pg_monitor as one possible choice. This\n> is the safest path by default, and this stuff is of a caliber similar\n> to pg_shmem_allocations in terms of internal contents.\n\nI think this is a better way than what I did in\n0001-Rellocated-the-codes-for-pg_backend_memory_contexts-.patch.\n\nAttached a patch.\n\n> \n> It seems to me that you are missing one \"REVOKE ALL on\n> pg_backend_memory_contexts FROM PUBLIC\" in patch 0003.\n> \n> By the way, if that was just for me, I would remove used_bytes, which\n> is just a computation from the total and free numbers. 
I'll defer\n> that point to Fujii-san.\n> --\n> Michael\n\n\nOn 2020/08/20 2:59, Kasahara Tatsuhito wrote:\n>>> I totally agree that it's not *enough*, but in contrast to you I \n>>> think\n>>> it's a good step. Subsequently we should add a way to get any \n>>> backends\n>>> memory usage.\n>>> It's not too hard to imagine how to serialize it in a way that can be\n>>> easily deserialized by another backend. I am imagining something like\n>>> sending a procsignal that triggers (probably at CFR() time) a backend \n>>> to\n>>> write its own memory usage into pg_memusage/<pid> or something \n>>> roughly\n>>> like that.\n>> \n>> Sounds good. Maybe we can also provide the SQL-callable function\n>> or view to read pg_memusage/<pid>, to make the analysis easier.\n> +1\n\n\nI'm thinking about starting a new thread to discuss exposing other\nbackend's memory context.\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION",
"msg_date": "Mon, 24 Aug 2020 13:01:42 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "\n\nOn 2020/08/24 13:01, torikoshia wrote:\n> On 2020-08-22 21:18, Michael Paquier wrote:\n> \n> Thanks for reviewing!\n> \n>> On Fri, Aug 21, 2020 at 11:27:06PM +0900, torikoshia wrote:\n>>> OK. Added a regression test on sysviews.sql.\n>>> (0001-Added-a-regression-test-for-pg_backend_memory_contex.patch)\n>>>\n>>> Fujii-san gave us an example, but I added more simple one considering\n>>> the simplicity of other tests on that.\n>>\n>> What you have sent in 0001 looks fine to me.� A small test is much\n>> better than nothing.\n\n+1\n\nBut as I proposed upthread, what about a bit complicated test as follows,\ne.g., to confirm that the internal logic for level works expectedly?\n\n SELECT name, ident, parent, level, total_bytes >= free_bytes FROM pg_backend_memory_contexts WHERE level = 0;\n\n\n>>\n>>> Added a patch for relocating the codes to mcxtfuncs.c.\n>>> (patches/0001-Rellocated-the-codes-for-pg_backend_memory_contexts-.patch)\n\nThanks for the patch! Looks good to me.\nBarring any objection, I will commit this patch at first.\n\n\n>>\n>> The same code is moved around line-by-line.\n>>\n>>> Of course, this restriction makes pg_backend_memory_contexts hard to use\n>>> when the user of the target session is not granted pg_monitor because the\n>>> scope of this view is session local.\n>>>\n>>> In this case, I imagine additional operations something like temporarily\n>>> granting pg_monitor to that user.\n>>\n>> Hmm.� I am not completely sure either that pg_monitor is the best fit\n>> here, because this view provides information about a bunch of internal\n>> structures.� Something that could easily be done though is to revoke\n>> the access from public, and then users could just set up GRANT\n>> permissions post-initdb, with pg_monitor as one possible choice.� This\n>> is the safest path by default, and this stuff is of a caliber similar\n>> to pg_shmem_allocations in terms of internal contents.\n> \n> I think this is a better way than what I did 
in\n> 0001-Rellocated-the-codes-for-pg_backend_memory_contexts-.patch.\n\nYou mean 0001-Restrict-the-access-to-pg_backend_memory_contexts-to.patch?\n\n\n> Attached a patch.\n\nThanks for updating the patch! This also looks good to me.\n\n\n>> It seems to me that you are missing one \"REVOKE ALL on\n>> pg_backend_memory_contexts FROM PUBLIC\" in patch 0003.\n>>\n>> By the way, if that was just for me, I would remove used_bytes, which\n>> is just a computation from the total and free numbers.� I'll defer\n>> that point to Fujii-san.\n\nYeah, I was just thinking that displaying also used_bytes was useful,\nbut this might be inconsistent with the other views' ways.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 24 Aug 2020 13:13:39 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "\n\nOn 2020/08/24 13:13, Fujii Masao wrote:\n> \n> \n> On 2020/08/24 13:01, torikoshia wrote:\n>> On 2020-08-22 21:18, Michael Paquier wrote:\n>>\n>> Thanks for reviewing!\n>>\n>>> On Fri, Aug 21, 2020 at 11:27:06PM +0900, torikoshia wrote:\n>>>> OK. Added a regression test on sysviews.sql.\n>>>> (0001-Added-a-regression-test-for-pg_backend_memory_contex.patch)\n>>>>\n>>>> Fujii-san gave us an example, but I added more simple one considering\n>>>> the simplicity of other tests on that.\n>>>\n>>> What you have sent in 0001 looks fine to me.� A small test is much\n>>> better than nothing.\n> \n> +1\n> \n> But as I proposed upthread, what about a bit complicated test as follows,\n> e.g., to confirm that the internal logic for level works expectedly?\n> \n> ���� SELECT name, ident, parent, level, total_bytes >= free_bytes FROM pg_backend_memory_contexts WHERE level = 0;\n> \n> \n>>>\n>>>> Added a patch for relocating the codes to mcxtfuncs.c.\n>>>> (patches/0001-Rellocated-the-codes-for-pg_backend_memory_contexts-.patch)\n> \n> Thanks for the patch! Looks good to me.\n> Barring any objection, I will commit this patch at first.\n\nAs far as I know, utils/adt is the directory to basically include the files\nfor a particular type or operator. So ISTM that mcxtfuncs.c doesn't\nfit to this directory. Isn't it better to put that in utils/mmgr ?\n\n\nThe copyright line in new file mcxtfuncs.c should be changed as follows\nbecause it contains new code?\n\n- * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\n- * Portions Copyright (c) 1994, Regents of the University of California\n+ * Portions Copyright (c) 2020, PostgreSQL Global Development Group\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 24 Aug 2020 14:48:50 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "On Mon, Aug 24, 2020 at 02:48:50PM +0900, Fujii Masao wrote:\n> As far as I know, utils/adt is the directory to basically include the files\n> for a particular type or operator. So ISTM that mcxtfuncs.c doesn't\n> fit to this directory. Isn't it better to put that in utils/mmgr ?\n\nWe have also stuff like ruleutils.c, dbsize.c, genfile.c there which\nis rather generic, so I would rather leave utils/mmgr/ out of the\nbusiness of this thread, and just keep inside all the lower-level APIs\nfor memory context handling. I don't have a strong feeling for one\nbeing better than the other, so if you prefer more one way than the\nother, that's fine by me as long as the split is done as the new\nfunctions depend on nothing static in mcxt.c. And you are the\ncommitter of this feature.\n\n> The copyright line in new file mcxtfuncs.c should be changed as follows\n> because it contains new code?\n\n> - * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\n> - * Portions Copyright (c) 1994, Regents of the University of California\n> + * Portions Copyright (c) 2020, PostgreSQL Global Development Group\n\nFWIW, I usually choose what's proposed in the patch as a matter of\nconsistency, because it is a no-brainer and because you don't have to\nthink about past references when it comes to structures or such.\n--\nMichael",
"msg_date": "Mon, 24 Aug 2020 17:09:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "On 2020-08-24 13:13, Fujii Masao wrote:\n> On 2020/08/24 13:01, torikoshia wrote:\n>> On 2020-08-22 21:18, Michael Paquier wrote:\n>> \n>> Thanks for reviewing!\n>> \n>>> On Fri, Aug 21, 2020 at 11:27:06PM +0900, torikoshia wrote:\n>>>> OK. Added a regression test on sysviews.sql.\n>>>> (0001-Added-a-regression-test-for-pg_backend_memory_contex.patch)\n>>>> \n>>>> Fujii-san gave us an example, but I added more simple one \n>>>> considering\n>>>> the simplicity of other tests on that.\n>>> \n>>> What you have sent in 0001 looks fine to me. A small test is much\n>>> better than nothing.\n> \n> +1\n> \n> But as I proposed upthread, what about a bit complicated test as \n> follows,\n> e.g., to confirm that the internal logic for level works expectedly?\n> \n> SELECT name, ident, parent, level, total_bytes >= free_bytes FROM\n> pg_backend_memory_contexts WHERE level = 0;\n\nOK!\nAttached an updated patch.\n\n> \n> \n>>> \n>>>> Added a patch for relocating the codes to mcxtfuncs.c.\n>>>> (patches/0001-Rellocated-the-codes-for-pg_backend_memory_contexts-.patch)\n> \n> Thanks for the patch! Looks good to me.\n> Barring any objection, I will commit this patch at first.\n> \n> \n>>> \n>>> The same code is moved around line-by-line.\n>>> \n>>>> Of course, this restriction makes pg_backend_memory_contexts hard to \n>>>> use\n>>>> when the user of the target session is not granted pg_monitor \n>>>> because the\n>>>> scope of this view is session local.\n>>>> \n>>>> In this case, I imagine additional operations something like \n>>>> temporarily\n>>>> granting pg_monitor to that user.\n>>> \n>>> Hmm. I am not completely sure either that pg_monitor is the best fit\n>>> here, because this view provides information about a bunch of \n>>> internal\n>>> structures. Something that could easily be done though is to revoke\n>>> the access from public, and then users could just set up GRANT\n>>> permissions post-initdb, with pg_monitor as one possible choice. 
\n>>> This\n>>> is the safest path by default, and this stuff is of a caliber similar\n>>> to pg_shmem_allocations in terms of internal contents.\n>> \n>> I think this is a better way than what I did in\n>> 0001-Rellocated-the-codes-for-pg_backend_memory_contexts-.patch.\n> \n> You mean \n> 0001-Restrict-the-access-to-pg_backend_memory_contexts-to.patch?\n\nOops, I meant \n0001-Restrict-the-access-to-pg_backend_memory_contexts-to.patch.\n\n> \n> \n>> Attached a patch.\n> \n> Thanks for updating the patch! This also looks good to me.\n> \n> \n>>> It seems to me that you are missing one \"REVOKE ALL on\n>>> pg_backend_memory_contexts FROM PUBLIC\" in patch 0003.\n>>> \n>>> By the way, if that was just for me, I would remove used_bytes, which\n>>> is just a computation from the total and free numbers. I'll defer\n>>> that point to Fujii-san.\n> \n> Yeah, I was just thinking that displaying also used_bytes was useful,\n> but this might be inconsistent with the other views' ways.\n> \n> Regards,\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION",
"msg_date": "Mon, 24 Aug 2020 21:56:06 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "\n\nOn 2020/08/24 17:09, Michael Paquier wrote:\n> On Mon, Aug 24, 2020 at 02:48:50PM +0900, Fujii Masao wrote:\n>> As far as I know, utils/adt is the directory to basically include the files\n>> for a particular type or operator. So ISTM that mcxtfuncs.c doesn't\n>> fit to this directory. Isn't it better to put that in utils/mmgr ?\n> \n> We have also stuff like ruleutils.c, dbsize.c, genfile.c there which\n> is rather generic, so I would rather leave utils/mmgr/ out of the\n> business of this thread, and just keep inside all the lower-level APIs\n> for memory context handling.\n\nUnderstood. So I will commit the latest patch 0001-Rellocated-the-codes-for-pg_backend_memory_contexts-.patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 25 Aug 2020 11:12:03 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "\n\nOn 2020/08/24 21:56, torikoshia wrote:\n> On 2020-08-24 13:13, Fujii Masao wrote:\n>> On 2020/08/24 13:01, torikoshia wrote:\n>>> On 2020-08-22 21:18, Michael Paquier wrote:\n>>>\n>>> Thanks for reviewing!\n>>>\n>>>> On Fri, Aug 21, 2020 at 11:27:06PM +0900, torikoshia wrote:\n>>>>> OK. Added a regression test on sysviews.sql.\n>>>>> (0001-Added-a-regression-test-for-pg_backend_memory_contex.patch)\n>>>>>\n>>>>> Fujii-san gave us an example, but I added more simple one considering\n>>>>> the simplicity of other tests on that.\n>>>>\n>>>> What you have sent in 0001 looks fine to me. A small test is much\n>>>> better than nothing.\n>>\n>> +1\n>>\n>> But as I proposed upthread, what about a bit complicated test as follows,\n>> e.g., to confirm that the internal logic for level works expectedly?\n>>\n>> SELECT name, ident, parent, level, total_bytes >= free_bytes FROM\n>> pg_backend_memory_contexts WHERE level = 0;\n> \n> OK!\n> Attached an updated patch.\n\nThanks for updating the patch! Looks good to me.\nBarring any objection, I will commit this patch.\n\n> \n>>\n>>\n>>>>\n>>>>> Added a patch for relocating the codes to mcxtfuncs.c.\n>>>>> (patches/0001-Rellocated-the-codes-for-pg_backend_memory_contexts-.patch)\n>>\n>> Thanks for the patch! Looks good to me.\n>> Barring any objection, I will commit this patch at first.\n>>\n>>\n>>>>\n>>>> The same code is moved around line-by-line.\n>>>>\n>>>>> Of course, this restriction makes pg_backend_memory_contexts hard to use\n>>>>> when the user of the target session is not granted pg_monitor because the\n>>>>> scope of this view is session local.\n>>>>>\n>>>>> In this case, I imagine additional operations something like temporarily\n>>>>> granting pg_monitor to that user.\n>>>>\n>>>> Hmm. I am not completely sure either that pg_monitor is the best fit\n>>>> here, because this view provides information about a bunch of internal\n>>>> structures. 
Something that could easily be done though is to revoke\n>>>> the access from public, and then users could just set up GRANT\n>>>> permissions post-initdb, with pg_monitor as one possible choice. This\n>>>> is the safest path by default, and this stuff is of a caliber similar\n>>>> to pg_shmem_allocations in terms of internal contents.\n>>>\n>>> I think this is a better way than what I did in\n>>> 0001-Rellocated-the-codes-for-pg_backend_memory_contexts-.patch.\n>>\n>> You mean 0001-Restrict-the-access-to-pg_backend_memory_contexts-to.patch?\n> \n> Oops, I meant 0001-Restrict-the-access-to-pg_backend_memory_contexts-to.patch.\n\nThis patch also looks good to me. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 25 Aug 2020 11:39:44 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
},
{
"msg_contents": "\n\nOn 2020/08/25 11:39, Fujii Masao wrote:\n> \n> \n> On 2020/08/24 21:56, torikoshia wrote:\n>> On 2020-08-24 13:13, Fujii Masao wrote:\n>>> On 2020/08/24 13:01, torikoshia wrote:\n>>>> On 2020-08-22 21:18, Michael Paquier wrote:\n>>>>\n>>>> Thanks for reviewing!\n>>>>\n>>>>> On Fri, Aug 21, 2020 at 11:27:06PM +0900, torikoshia wrote:\n>>>>>> OK. Added a regression test on sysviews.sql.\n>>>>>> (0001-Added-a-regression-test-for-pg_backend_memory_contex.patch)\n>>>>>>\n>>>>>> Fujii-san gave us an example, but I added more simple one considering\n>>>>>> the simplicity of other tests on that.\n>>>>>\n>>>>> What you have sent in 0001 looks fine to me. A small test is much\n>>>>> better than nothing.\n>>>\n>>> +1\n>>>\n>>> But as I proposed upthread, what about a bit complicated test as follows,\n>>> e.g., to confirm that the internal logic for level works expectedly?\n>>>\n>>> SELECT name, ident, parent, level, total_bytes >= free_bytes FROM\n>>> pg_backend_memory_contexts WHERE level = 0;\n>>\n>> OK!\n>> Attached an updated patch.\n> \n> Thanks for updating the patch! Looks good to me.\n> Barring any objection, I will commit this patch.\n> \n>>\n>>>\n>>>\n>>>>>\n>>>>>> Added a patch for relocating the codes to mcxtfuncs.c.\n>>>>>> (patches/0001-Rellocated-the-codes-for-pg_backend_memory_contexts-.patch)\n>>>\n>>> Thanks for the patch! Looks good to me.\n>>> Barring any objection, I will commit this patch at first.\n>>>\n>>>\n>>>>>\n>>>>> The same code is moved around line-by-line.\n>>>>>\n>>>>>> Of course, this restriction makes pg_backend_memory_contexts hard to use\n>>>>>> when the user of the target session is not granted pg_monitor because the\n>>>>>> scope of this view is session local.\n>>>>>>\n>>>>>> In this case, I imagine additional operations something like temporarily\n>>>>>> granting pg_monitor to that user.\n>>>>>\n>>>>> Hmm. 
I am not completely sure either that pg_monitor is the best fit\n>>>>> here, because this view provides information about a bunch of internal\n>>>>> structures. Something that could easily be done though is to revoke\n>>>>> the access from public, and then users could just set up GRANT\n>>>>> permissions post-initdb, with pg_monitor as one possible choice. This\n>>>>> is the safest path by default, and this stuff is of a caliber similar\n>>>>> to pg_shmem_allocations in terms of internal contents.\n>>>>\n>>>> I think this is a better way than what I did in\n>>>> 0001-Rellocated-the-codes-for-pg_backend_memory_contexts-.patch.\n>>>\n>>> You mean 0001-Restrict-the-access-to-pg_backend_memory_contexts-to.patch?\n>>\n>> Oops, I meant 0001-Restrict-the-access-to-pg_backend_memory_contexts-to.patch.\n> \n> This patch also looks good to me. Thanks!\n\nI pushed the proposed three patches. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 26 Aug 2020 10:54:19 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating a function for exposing memory usage of backend process"
}
] |
[
{
"msg_contents": "Hi,\n\nAs Amul sent a patch about \"ALTER SYSTEM READ ONLY\"[1], with similar futur\nobjectives than mine, I decided to share the humble patch I am playing with to\nstep down an instance from primary to standby status.\n\nI'm still wondering about the coding style, but as the discussion about this\nkind of feature is rising, I share it in an early stage so it has a chance to\nbe discussed.\n\nI'm opening a new discussion to avoid disturbing Amul's one. \n\nThe design of my patch is similar to the crash recovery code, without resetting\nthe shared memory. It supports smart and fast demote. The only existing user\ninterface currently is \"pg_ctl [-m smart|fast] demote\". An SQL admin function,\neg. pg_demote(), would be easy to add.\n\nMain difference with Amul's patch is that all backends must be disconnected to\nprocess with the demote. Either we wait for them to disconnect (smart) or we\nkill them (fast). This makes life much easier from the code point of view, but\nmuch more naive as well. Eg. calling \"SELECT pg_demote('fast')\" as an admin\nwould kill the session, with no options to wait for the action to finish, as we\ndo with pg_promote(). Keeping read only session around could probably be\nachieved using global barrier as Amul did, but without all the complexity\nrelated to WAL writes prohibition.\n\nThere's still some questions in the current patch. As I wrote, it's an humble\npatch, a proof of concept, a bit naive.\n\nDoes it worth discussing it and improving it further or do I miss something\nobvious in this design that leads to a dead end?\n\nThanks.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/flat/CAAJ_b97KZzdJsffwRK7w0XU5HnXkcgKgTR69t8cOZztsyXjkQw%40mail.gmail.com",
"msg_date": "Wed, 17 Jun 2020 17:44:51 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "[patch] demote"
},
{
"msg_contents": "On Wed, Jun 17, 2020 at 11:45 AM Jehan-Guillaume de Rorthais\n<jgdr@dalibo.com> wrote:\n> As Amul sent a patch about \"ALTER SYSTEM READ ONLY\"[1], with similar futur\n> objectives than mine, I decided to share the humble patch I am playing with to\n> step down an instance from primary to standby status.\n\nCool! This was vaguely on my hit list, but neither I nor any of my\ncolleagues had gotten the time and energy to have a go at it.\n\n> Main difference with Amul's patch is that all backends must be disconnected to\n> process with the demote. Either we wait for them to disconnect (smart) or we\n> kill them (fast). This makes life much easier from the code point of view, but\n> much more naive as well. Eg. calling \"SELECT pg_demote('fast')\" as an admin\n> would kill the session, with no options to wait for the action to finish, as we\n> do with pg_promote(). Keeping read only session around could probably be\n> achieved using global barrier as Amul did, but without all the complexity\n> related to WAL writes prohibition.\n>\n> There's still some questions in the current patch. As I wrote, it's an humble\n> patch, a proof of concept, a bit naive.\n>\n> Does it worth discussing it and improving it further or do I miss something\n> obvious in this design that leads to a dead end?\n\nI haven't looked at your code, but I think we should view the two\nefforts as complementing each other, not competing. With both patches\nin play, a clean switchover would look like this:\n\n- first use ALTER SYSTEM READ ONLY (or whatever we decide to call it)\nto make the primary read only, killing off write transactions\n- next use pg_ctl promote to promote the standby\n- finally use pg_ctl demote (or whatever we decide to call it) to turn\nthe read-only primary into a standby of the new primary\n\nI think this would be waaaaay better than what you have to do today,\nwhich as I mentioned in my reply to Tom on the other thread, is very\ncomplicated and error-prone. 
I think with the combination of that\npatch and this one (or some successor version of each) we could get to\na point where the tooling to do a clean switchover is relatively easy\nto write and doesn't involve having to shut down the server completely\nat any point. If we can do it while also preserving connections, at\nleast for read-only queries, that's a better user experience, but as\nTom pointed out over there, there are real concerns about the\ncomplexity of these patches, so it may be that the approach you've\ntaken of just killing everything is safer and thus a superior choice\noverall.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 17 Jun 2020 12:29:31 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [patch] demote"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-17 17:44:51 +0200, Jehan-Guillaume de Rorthais wrote:\n> As Amul sent a patch about \"ALTER SYSTEM READ ONLY\"[1], with similar futur\n> objectives than mine, I decided to share the humble patch I am playing with to\n> step down an instance from primary to standby status.\n\nTo make sure we are on the same page: What your patch intends to do is\nto leave the server running, but switch from being a primary to\nreplicating from another system. Correct?\n\n\n> Main difference with Amul's patch is that all backends must be disconnected to\n> process with the demote. Either we wait for them to disconnect (smart) or we\n> kill them (fast). This makes life much easier from the code point of view, but\n> much more naive as well. Eg. calling \"SELECT pg_demote('fast')\" as an admin\n> would kill the session, with no options to wait for the action to finish, as we\n> do with pg_promote(). Keeping read only session around could probably be\n> achieved using global barrier as Amul did, but without all the complexity\n> related to WAL writes prohibition.\n\nFWIW just doing that for normal backends isn't sufficient, you also have\nto deal with bgwriter, checkpointer, ... triggering WAL writes (FPWs due\nto hint bits, the checkpoint record, and some more).\n\n\n> There's still some questions in the current patch. As I wrote, it's an humble\n> patch, a proof of concept, a bit naive.\n> \n> Does it worth discussing it and improving it further or do I miss something\n> obvious in this design that leads to a dead end?\n\nI don't think there's a fundamental issue, but I think it needs to deal\nwith a lot more things than it does right now. StartupXLOG doesn't\ncurrently deal correctly with subsystems that are already\ninitialized. And your patch doesn't make it so as far as I can tell.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Jun 2020 11:14:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [patch] demote"
},
{
"msg_contents": "On Wed, 17 Jun 2020 12:29:31 -0400\nRobert Haas <robertmhaas@gmail.com> wrote:\n\n[...]\n> > Main difference with Amul's patch is that all backends must be disconnected\n> > to process with the demote. Either we wait for them to disconnect (smart)\n> > or we kill them (fast). This makes life much easier from the code point of\n> > view, but much more naive as well. Eg. calling \"SELECT pg_demote('fast')\"\n> > as an admin would kill the session, with no options to wait for the action\n> > to finish, as we do with pg_promote(). Keeping read only session around\n> > could probably be achieved using global barrier as Amul did, but without\n> > all the complexity related to WAL writes prohibition.\n> >\n> > There's still some questions in the current patch. As I wrote, it's an\n> > humble patch, a proof of concept, a bit naive.\n> >\n> > Does it worth discussing it and improving it further or do I miss something\n> > obvious in this design that leads to a dead end? \n> \n> I haven't looked at your code, but I think we should view the two\n> efforts as complementing each other, not competing.\n\nThat was part of my feeling. I like the idea of keeping readonly backends\naround. 
But I'm not convinced by the \"ALTER SYSTEM READ ONLY\" feature on its\nown.\n\nAt some expense, Admin can already set the system as readonly from the\napplication point of view, using:\n\n alter system set default_transaction_read_only TO on;\n select pg_reload_conf();\n\nCurrent RW xact will finish, but no other will be allowed.\n\n> With both patches in play, a clean switchover would look like this:\n> \n> - first use ALTER SYSTEM READ ONLY (or whatever we decide to call it)\n> to make the primary read only, killing off write transactions\n> - next use pg_ctl promote to promote the standby\n> - finally use pg_ctl demote (or whatever we decide to call it) to turn\n> the read-only primary into a standby of the new primary\n\nI'm not sure how useful ALTER SYSTEM READ ONLY is, outside of the switchover\nscope. This seems like it should be included in the demote process itself. If we\nfocus on user experience, my first original goal was:\n\n* demote the primary\n* promote a standby\n\nLater down the path of various additional patches (keep readonly backends, add\npg_demote(), etc), we could extend the replication protocol so a switchover can\nbe negotiated and controlled from the nodes themselves.\n\n> I think this would be waaaaay better than what you have to do today,\n> which as I mentioned in my reply to Tom on the other thread, is very\n> complicated and error-prone.\n\nWell, I agree, the current procedure to achieve a clean switchover is\ndifficult from the user point of view. \n\nFor the record, in PAF (a Pacemaker user agent) we are parsing the pg_waldump\noutput to check if the designated standby to promote received the shutdown\ncheckpoint from the primary. 
If it does, we accept promoting.\n\nManually, I usually shut down the primary, checkpoint on the standby, compare \"REDO\nlocation\" from both sides, then promote.\n\n> I think with the combination of that\n> patch and this one (or some successor version of each) we could get to\n> a point where the tooling to do a clean switchover is relatively easy\n> to write and doesn't involve having to shut down the server completely\n> at any point.\n\nThat would be great, yes.\n\n> If we can do it while also preserving connections, at\n> least for read-only queries, that's a better user experience, but as\n> Tom pointed out over there, there are real concerns about the\n> complexity of these patches, so it may be that the approach you've\n> taken of just killing everything is safer and thus a superior choice\n> overall.\n\nAs long as this approach doesn't close future doors to keeping read-only backends\naround, that might be a good first step.\n\nThank you!\n\n\n",
"msg_date": "Thu, 18 Jun 2020 12:02:02 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: [patch] demote"
},
{
"msg_contents": "On Wed, 17 Jun 2020 11:14:47 -0700\nAndres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n> \n> On 2020-06-17 17:44:51 +0200, Jehan-Guillaume de Rorthais wrote:\n> > As Amul sent a patch about \"ALTER SYSTEM READ ONLY\"[1], with similar futur\n> > objectives than mine, I decided to share the humble patch I am playing with\n> > to step down an instance from primary to standby status. \n> \n> To make sure we are on the same page: What your patch intends to do is\n> to leave the server running, but switch from being a primary to\n> replicating from another system. Correct?\n\nYes. The instance status is downgraded from \"in production\" to \"in archive\nrecovery\".\n\nOf course, it will start replicating depending on the archive_mode/command and\nprimary_conninfo setup.\n\n> > Main difference with Amul's patch is that all backends must be disconnected\n> > to process with the demote. Either we wait for them to disconnect (smart)\n> > or we kill them (fast). This makes life much easier from the code point of\n> > view, but much more naive as well. Eg. calling \"SELECT pg_demote('fast')\"\n> > as an admin would kill the session, with no options to wait for the action\n> > to finish, as we do with pg_promote(). Keeping read only session around\n> > could probably be achieved using global barrier as Amul did, but without\n> > all the complexity related to WAL writes prohibition. \n> \n> FWIW just doing that for normal backends isn't sufficient, you also have\n> to deal with bgwriter, checkpointer, ... triggering WAL writes (FPWs due\n> to hint bits, the checkpoint record, and some more).\n\nIn fact, the patch relies on an existing code path in the state machine. The\nstartup process is started when the code enters the PM_NO_CHILDREN state. 
This\nstate is set when «These other guys should be dead already», as stated in the\ncode:\n\n\t\t\t/* These other guys should be dead already */\n\t\t\tAssert(StartupPID == 0);\n\t\t\tAssert(WalReceiverPID == 0);\n\t\t\tAssert(BgWriterPID == 0);\n\t\t\tAssert(CheckpointerPID == 0);\n\t\t\tAssert(WalWriterPID == 0);\n\t\t\tAssert(AutoVacPID == 0);\n\t\t\t/* syslogger is not considered here */\n\t\t\tpmState = PM_NO_CHILDREN;\n\n> > There's still some questions in the current patch. As I wrote, it's an\n> > humble patch, a proof of concept, a bit naive.\n> > \n> > Does it worth discussing it and improving it further or do I miss something\n> > obvious in this design that leads to a dead end? \n> \n> I don't think there's a fundamental issue, but I think it needs to deal\n> with a lot more things than it does right now. StartupXLOG doesn't\n> currently deal correctly with subsystems that are already\n> initialized. And your patch doesn't make it so as far as I can tell.\n\nIf you are talking about bgwriter, checkpointer, etc., as far as I understand\nthe current state machine, my patch actually deals with them.\n\nThank you for your feedback!\n\nI'll study how hard it would be to keep read-only backends around during the\ndemote step.\n\nRegards,\n\n\n",
"msg_date": "Thu, 18 Jun 2020 12:16:27 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: [patch] demote"
},
{
"msg_contents": "\n\nOn 2020/06/18 1:29, Robert Haas wrote:\n> On Wed, Jun 17, 2020 at 11:45 AM Jehan-Guillaume de Rorthais\n> <jgdr@dalibo.com> wrote:\n>> As Amul sent a patch about \"ALTER SYSTEM READ ONLY\"[1], with similar futur\n>> objectives than mine, I decided to share the humble patch I am playing with to\n>> step down an instance from primary to standby status.\n> \n> Cool! This was vaguely on my hit list, but neither I nor any of my\n> colleagues had gotten the time and energy to have a go at it.\n> \n>> Main difference with Amul's patch is that all backends must be disconnected to\n>> process with the demote. Either we wait for them to disconnect (smart) or we\n>> kill them (fast). This makes life much easier from the code point of view, but\n>> much more naive as well. Eg. calling \"SELECT pg_demote('fast')\" as an admin\n>> would kill the session, with no options to wait for the action to finish, as we\n>> do with pg_promote(). Keeping read only session around could probably be\n>> achieved using global barrier as Amul did, but without all the complexity\n>> related to WAL writes prohibition.\n>>\n>> There's still some questions in the current patch. As I wrote, it's an humble\n>> patch, a proof of concept, a bit naive.\n>>\n>> Does it worth discussing it and improving it further or do I miss something\n>> obvious in this design that leads to a dead end?\n> \n> I haven't looked at your code, but I think we should view the two\n> efforts as complementing each other, not competing. 
With both patches\n> in play, a clean switchover would look like this:\n> \n> - first use ALTER SYSTEM READ ONLY (or whatever we decide to call it)\n> to make the primary read only, killing off write transactions\n> - next use pg_ctl promote to promote the standby\n> - finally use pg_ctl demote (or whatever we decide to call it) to turn\n> the read-only primary into a standby of the new primary\n\nISTM that a clean switchover is possible without \"ALTER SYSTEM READ ONLY\".\nWhat about the following procedure?\n\n1. Demote the primary to a standby. Then this demoted standby is read-only.\n2. The original standby automatically establishes the cascading replication\n connection with the demoted standby.\n3. Wait for all the WAL records available in the demoted standby to be streamed\n to the original standby.\n4. Promote the original standby to new primary.\n5. Change primary_conninfo in the demoted standby so that it establishes\n the replication connection with new primary.\n\nSo it seems enough to implement the \"demote\" feature for a clean switchover.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 18 Jun 2020 21:41:45 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [patch] demote"
},
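[Editorial note: step 3 of the switchover procedure above — waiting until all WAL available on the demoted standby has been streamed to the original standby — boils down to comparing the LSNs reported by the two servers. A minimal Python sketch of that comparison, reimplementing the arithmetic behind PostgreSQL's pg_wal_lsn_diff(); the helper names are illustrative and not part of any PostgreSQL client API:]

```python
def parse_lsn(lsn: str) -> int:
    """Convert an LSN printed as 'hi/lo' (two hex halves, e.g. '0/16B3740')
    into a 64-bit byte position: the high 32 bits come before the slash,
    the low 32 bits after it."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)


def wal_lsn_diff(a: str, b: str) -> int:
    """Byte distance between two LSNs, like SQL's pg_wal_lsn_diff(a, b)."""
    return parse_lsn(a) - parse_lsn(b)


def standby_caught_up(demoted_flush_lsn: str, standby_replay_lsn: str) -> bool:
    """True once the original standby has replayed everything the demoted
    standby has flushed, i.e. step 3 is complete and promotion is safe."""
    return wal_lsn_diff(demoted_flush_lsn, standby_replay_lsn) <= 0
```

[In a real switchover script the two LSNs would be fetched from the servers themselves, e.g. via pg_last_wal_replay_lsn() on the standby side; polling until standby_caught_up() returns true is what guarantees no flushed WAL is left behind before promoting.]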
{
"msg_contents": "On Thu, Jun 18, 2020 at 6:02 AM Jehan-Guillaume de Rorthais\n<jgdr@dalibo.com> wrote:\n> At some expense, Admin can already set the system as readonly from the\n> application point of view, using:\n>\n> alter system set default_transaction_read_only TO on;\n> select pg_reload_conf();\n>\n> Current RW xact will finish, but no other will be allowed.\n\nThat doesn't block all WAL generation, though:\n\nrhaas=# alter system set default_transaction_read_only TO on;\nALTER SYSTEM\nrhaas=# select pg_reload_conf();\n pg_reload_conf\n----------------\n t\n(1 row)\nrhaas=# cluster pgbench_accounts_pkey on pgbench_accounts;\nrhaas=#\n\nThere's a bunch of other things it also doesn't block, too. If you're\ntrying to switch to a new primary, you really want to stop WAL\ngeneration completely on the old one. Otherwise, you can't guarantee\nthat the machine you're going to promote is completely caught up,\nwhich means you might lose some changes, and you might have to\npg_rewind the old master.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Jun 2020 11:15:02 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [patch] demote"
},
{
"msg_contents": "On Thu, Jun 18, 2020 at 8:41 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> ISTM that a clean switchover is possible without \"ALTER SYSTEM READ ONLY\".\n> What about the following procedure?\n>\n> 1. Demote the primary to a standby. Then this demoted standby is read-only.\n> 2. The orignal standby automatically establishes the cascading replication\n> connection with the demoted standby.\n> 3. Wait for all the WAL records available in the demoted standby to be streamed\n> to the orignal standby.\n> 4. Promote the original standby to new primary.\n> 5. Change primary_conninfo in the demoted standby so that it establishes\n> the replication connection with new primary.\n>\n> So it seems enough to implement \"demote\" feature for a clean switchover.\n\nThere's something to that idea. I think it somewhat depends on how\ninvasive the various operations are. For example, I'm not really sure\nhow feasible it is to demote without a full server restart that kicks\nout all sessions. If that is required, it's a significant disadvantage\ncompared to ASRO. On the other hand, if a machine can be demoted just\nby kicking out R/W sessions, as ASRO currently does, then maybe\nthere's not that much difference. Or maybe both designs are subject to\nimprovement and we can do something even less invasive...\n\nOne thing I think people are going to want to do is have the master go\nread-only if it loses communication to the rest of the network, to\navoid or at least mitigate split-brain. However, such network\ninterruptions are often transient, so it might not be uncommon to\nbriefly go read-only due to a network blip, but then recover quickly\nand return to a read-write state. 
It doesn't seem to matter much\nwhether that read-only state is a new kind of normal operation (like\nwhat ASRO would do) or whether we've actually returned to a recovery\nstate (as demote would do) but the collateral effects of the state\nchange do matter.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Jun 2020 11:22:47 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [patch] demote"
},
{
"msg_contents": "On Thu, 18 Jun 2020 11:15:02 -0400\nRobert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Jun 18, 2020 at 6:02 AM Jehan-Guillaume de Rorthais\n> <jgdr@dalibo.com> wrote:\n> > At some expense, Admin can already set the system as readonly from the\n> > application point of view, using:\n> >\n> > alter system set default_transaction_read_only TO on;\n> > select pg_reload_conf();\n> >\n> > Current RW xact will finish, but no other will be allowed. \n> \n> That doesn't block all WAL generation, though:\n> \n> rhaas=# alter system set default_transaction_read_only TO on;\n> ALTER SYSTEM\n> rhaas=# select pg_reload_conf();\n> pg_reload_conf\n> ----------------\n> t\n> (1 row)\n> rhaas=# cluster pgbench_accounts_pkey on pgbench_accounts;\n> rhaas=#\n\nYes, this, and the fact that any user can switch transaction_read_only back to\noff easily. This was a terrible example.\n\nMy point was that ALTER SYSTEM READ ONLY as described here doesn't feel like a\nrequired user feature, outside of the demote scope. It might be useful for the\ndemote process, but only from the core point of view, without user interaction.\nIt seems there's no other purpose from the admin standpoint.\n\n> There's a bunch of other things it also doesn't block, too. If you're\n> trying to switch to a new primary, you really want to stop WAL\n> generation completely on the old one. Otherwise, you can't guarantee\n> that the machine you're going to promote is completely caught up,\n> which means you might lose some changes, and you might have to\n> pg_rewind the old master.\n\nYes, of course. I wasn't claiming transaction_read_only was useful in a \nswitchover procedure, sorry for the confusion and misleading comment.\n\nRegards,\n\n\n",
"msg_date": "Thu, 18 Jun 2020 17:27:32 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: [patch] demote"
},
{
"msg_contents": "On Thu, 18 Jun 2020 11:22:47 -0400\nRobert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Jun 18, 2020 at 8:41 AM Fujii Masao <masao.fujii@oss.nttdata.com>\n> wrote:\n> > ISTM that a clean switchover is possible without \"ALTER SYSTEM READ ONLY\".\n> > What about the following procedure?\n> >\n> > 1. Demote the primary to a standby. Then this demoted standby is read-only.\n> > 2. The orignal standby automatically establishes the cascading replication\n> > connection with the demoted standby.\n> > 3. Wait for all the WAL records available in the demoted standby to be\n> > streamed to the orignal standby.\n> > 4. Promote the original standby to new primary.\n> > 5. Change primary_conninfo in the demoted standby so that it establishes\n> > the replication connection with new primary.\n> >\n> > So it seems enough to implement \"demote\" feature for a clean switchover. \n> \n> There's something to that idea. I think it somewhat depends on how\n> invasive the various operations are. For example, I'm not really sure\n> how feasible it is to demote without a full server restart that kicks\n> out all sessions. If that is required, it's a significant disadvantage\n> compared to ASRO. On the other hand, if a machine can be demoted just\n> by kicking out R/W sessions, as ASRO currently does, then maybe\n> there's not that much difference. Or maybe both designs are subject to\n> improvement and we can do something even less invasive...\n\nConsidering improvements to the current demote patch, 
I was planning to dig in\nthe following direction:\n\n* add a new state in the state machine where all backends are idle\n* this new state forbids any new writes, in the same fashion we do on standby nodes\n* this state could either wait for end of xact, or cancel/kill\n RW backends, in the same fashion the current smart/fast stops do\n* from this state, we might then roll back pending prepared xacts, stop other\n sub-processes, etc. (as the current patch does), and demote safely to\n PM_RECOVERY or PM_HOT_STANDBY (depending on the setup).\n\nIs this something worth considering?\nMaybe the code will be so close to ASRO that it would just be kind of a fusion of\nboth patches? I don't know, I didn't look at the ASRO patch yet.\n\n> One thing I think people are going to want to do is have the master go\n> read-only if it loses communication to the rest of the network, to\n> avoid or at least mitigate split-brain. However, such network\n> interruptions are often transient, so it might not be uncommon to\n> briefly go read-only due to a network blip, but then recover quickly\n> and return to a read-write state. It doesn't seem to matter much\n> whether that read-only state is a new kind of normal operation (like\n> what ASRO would do) or whether we've actually returned to a recovery\n> state (as demote would do) but the collateral effects of the state\n> change do matter.\n\nWell, triggering such actions (demote or read-only) often follows an external\ndecision, hopefully relying on at least some quorum, and being able to escalate\nto watchdog or fencing is required.\n\nMost tools around will need to demote or fence. It seems dangerous to flip\nbetween read-only/read-write on a bunch of cluster nodes. It might quickly get\nmessy, especially since a former primary with non-replicated data could\nautomatically replicate from a new primary without screaming...\n\nRegards,\n\n\n",
"msg_date": "Thu, 18 Jun 2020 17:56:07 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: [patch] demote"
},
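[Editorial note: the bulleted proposal above amounts to adding one new state to the postmaster state machine. A purely illustrative Python model of that flow follows — PM_RUN, PM_NO_CHILDREN, PM_RECOVERY and PM_HOT_STANDBY are real postmaster states, while the intermediate PM_DEMOTE_WAIT_BACKENDS name and the transition table are hypothetical, invented for this sketch:]

```python
from enum import Enum, auto


class PMState(Enum):
    """Subset of postmaster states relevant to the proposal."""
    PM_RUN = auto()                   # normal read-write operation
    PM_DEMOTE_WAIT_BACKENDS = auto()  # proposed: writes forbidden; wait for or kill RW backends
    PM_NO_CHILDREN = auto()           # helper children stopped, prepared xacts handled
    PM_RECOVERY = auto()              # demoted, hot_standby off
    PM_HOT_STANDBY = auto()           # demoted, read-only sessions allowed


# Allowed hops under the proposal (hypothetical, mirrors the bullet list).
ALLOWED = {
    PMState.PM_RUN: {PMState.PM_DEMOTE_WAIT_BACKENDS},
    PMState.PM_DEMOTE_WAIT_BACKENDS: {PMState.PM_NO_CHILDREN},
    PMState.PM_NO_CHILDREN: {PMState.PM_RECOVERY, PMState.PM_HOT_STANDBY},
}


def demote_path(hot_standby: bool) -> list:
    """Ordered states a demotion would pass through, ending in PM_RECOVERY
    or PM_HOT_STANDBY depending on the setup."""
    final = PMState.PM_HOT_STANDBY if hot_standby else PMState.PM_RECOVERY
    path = [PMState.PM_RUN, PMState.PM_DEMOTE_WAIT_BACKENDS,
            PMState.PM_NO_CHILDREN, final]
    # Sanity-check that every hop is an allowed transition.
    assert all(b in ALLOWED[a] for a, b in zip(path, path[1:]))
    return path
```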
{
"msg_contents": "On Thu, Jun 18, 2020 at 11:56 AM Jehan-Guillaume de Rorthais\n<jgdr@dalibo.com> wrote:\n> Considering the current demote patch improvement. I was considering to digg in\n> the following direction:\n>\n> * add a new state in the state machine where all backends are idle\n> * this new state forbid any new writes, the same fashion we do on standby nodes\n> * this state could either wait for end of xact, or cancel/kill\n> RW backends, in the same fashion current smart/fast stop do\n> * from this state, we might then rollback pending prepared xact, stop other\n> sub-process etc (as the current patch does), and demote safely to\n> PM_RECOVERY or PM_HOT_STANDBY (depending on the setup).\n>\n> Is it something worth considering?\n> Maybe the code will be so close from ASRO, it would just be kind of a fusion of\n> both patch? I don't know, I didn't look at the ASRO patch yet.\n\nI don't think that the postmaster state machine is the interesting\npart of this problem. The tricky parts have to do with updating shared\nmemory state, and with updating per-backend private state. For\nexample, snapshots are taken in a different way during recovery than\nthey are in normal operation, hence SnapshotData's takenDuringRecovery\nmember. And I think that we allocate extra shared memory space for\nstoring the data that those snapshots use if, and only if, the server\nstarts up in recovery. So if the server goes backward from normal\nrunning into recovery, we might not have the space that we need in\nshared memory to store the extra data, and even if we had the space it\nmight not be populated correctly, and the code that takes snapshots\nmight not be written properly to handle multiple transitions between\nrecovery and normal running, or even a single backward transition.\n\nIn general, there's code scattered all throughout the system that\nassumes the recovery -> normal running transition is one-way. 
If we go\nback into recovery by killing off all backends and reinitializing\nshared memory, then we don't have to worry about that stuff. If we do\nanything less than that, we have to find all the code that relies on\nnever reentering recovery and fix it all. Now it's also true that we\nhave to do some other things, like restarting the startup process, and\nstopping things like autovacuum, and the postmaster may need to be\ninvolved in some of that. There's clearly some engineering work there,\nbut I think it's substantially less than the amount of engineering\nwork involved in fixing problems with shared memory contents and\nbackend-local state.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Jun 2020 12:33:28 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [patch] demote"
},
{
"msg_contents": "\n\nOn 2020/06/19 0:22, Robert Haas wrote:\n> On Thu, Jun 18, 2020 at 8:41 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> ISTM that a clean switchover is possible without \"ALTER SYSTEM READ ONLY\".\n>> What about the following procedure?\n>>\n>> 1. Demote the primary to a standby. Then this demoted standby is read-only.\n>> 2. The orignal standby automatically establishes the cascading replication\n>> connection with the demoted standby.\n>> 3. Wait for all the WAL records available in the demoted standby to be streamed\n>> to the orignal standby.\n>> 4. Promote the original standby to new primary.\n>> 5. Change primary_conninfo in the demoted standby so that it establishes\n>> the replication connection with new primary.\n>>\n>> So it seems enough to implement \"demote\" feature for a clean switchover.\n> \n> There's something to that idea. I think it somewhat depends on how\n> invasive the various operations are. For example, I'm not really sure\n> how feasible it is to demote without a full server restart that kicks\n> out all sessions. If that is required, it's a significant disadvantage\n> compared to ASRO.\n\nEven with ASRO, isn't a server restart necessary, with RO sessions\nkicked out, when demoting the RO primary to a standby, i.e., during a clean\nswitchover?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 19 Jun 2020 01:55:35 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [patch] demote"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-18 12:16:27 +0200, Jehan-Guillaume de Rorthais wrote:\n> On Wed, 17 Jun 2020 11:14:47 -0700\n> > I don't think there's a fundamental issue, but I think it needs to deal\n> > with a lot more things than it does right now. StartupXLOG doesn't\n> > currently deal correctly with subsystems that are already\n> > initialized. And your patch doesn't make it so as far as I can tell.\n> \n> If you are talking about bgwriter, checkpointer, etc, as far as I understand\n> the current state machine, my patch actually deal with them.\n\nI'm talking about subsystems like subtrans, multixact, etc. not being ok\nwith suddenly being initialized a second time. You cannot call\nStartupXLOG twice without making modifications to it.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 18 Jun 2020 10:10:15 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [patch] demote"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-18 21:41:45 +0900, Fujii Masao wrote:\n> ISTM that a clean switchover is possible without \"ALTER SYSTEM READ ONLY\".\n> What about the following procedure?\n> \n> 1. Demote the primary to a standby. Then this demoted standby is read-only.\n\nAs far as I can tell this step includes ALTER SYSTEM READ ONLY. Sure you\ncan choose not to expose ASRO to the user separately from demote, but\nthat's a minuscule part of the complexity.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 18 Jun 2020 10:15:33 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [patch] demote"
},
{
"msg_contents": "On Thu, Jun 18, 2020 at 12:55 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> Even with ASRO, the server restart is necessary and RO sessions are\n> kicked out when demoting RO primary to a standby, i.e., during a clean\n> switchover?\n\nThe ASRO patch doesn't provide a way for a running server to be put\nback into recovery, so yes, that is required, unless some other patch\nfixes it so that it isn't. It would be better to find a way where\nwe never need to kill off R/O sessions at all, but I think that would\nrequire all the machinery from the ASRO patch plus some more. If you\nwant to allow sessions to survive a state transition like this -\nwhether it's to a WAL-read-only state or all the way back to recovery\n- you need a way to prevent further WAL writes in those sessions. Most\nof the stuff that the ASRO patch does is concerned with that. So it\ndoesn't seem likely to me that we can just throw all that code away,\nunless by chance somebody else has got a better version of the same\nthing already. To go back to recovery rather than just to a read-only\nstate, I think you'd need to grapple with some additional issues that\npatch doesn't touch, like some of the snapshot-taking stuff, but I\nthink you still need to solve all of the problems that it does deal\nwith, unless you're OK with killing every session.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Jun 2020 13:16:23 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [patch] demote"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> ... To go back to recovery rather than just to a read-only\n> state, I think you'd need to grapple with some additional issues that\n> patch doesn't touch, like some of the snapshot-taking stuff, but I\n> think you still need to solve all of the problems that it does deal\n> with, unless you're OK with killing every session.\n\nIt seems like this is the core decision that needs to be taken. If\nwe're willing to have these state transitions include a server restart,\nthen many things get simpler. If we're not, it's gonna cost us in\ncode complexity and hence bugs. Maybe the usability gain is worth it,\nor maybe not.\n\nI think it would probably be worth the trouble to pursue both designs in\nparallel for awhile, so we can get a better handle on exactly how much\ncomplexity we're buying into with the more ambitious definition.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Jun 2020 13:24:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [patch] demote"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-18 13:24:38 -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > ... To go back to recovery rather than just to a read-only\n> > state, I think you'd need to grapple with some additional issues that\n> > patch doesn't touch, like some of the snapshot-taking stuff, but I\n> > think you still need to solve all of the problems that it does deal\n> > with, unless you're OK with killing every session.\n> \n> It seems like this is the core decision that needs to be taken. If\n> we're willing to have these state transitions include a server restart,\n> then many things get simpler. If we're not, it's gonna cost us in\n> code complexity and hence bugs. Maybe the usability gain is worth it,\n> or maybe not.\n> \n> I think it would probably be worth the trouble to pursue both designs in\n> parallel for awhile, so we can get a better handle on exactly how much\n> complexity we're buying into with the more ambitious definition.\n\nWhat I like about ALTER SYSTEM READ ONLY is that it basically would\nlikely be a part of both a restart and a non-restart based\nimplementation.\n\nI don't really get why the demote in this thread is mentioned as an\nalternative - it pretty obviously has to include a large portion of\nALTER SYSTEM READ ONLY.\n\nThe only part that could really be skipped by going straight to demote\nis a way to make ASRO invocable directly. You can simplify a bit more by\nkilling all user sessions, but at that point there's not that much\nupshot for having a no-restart version of demote in the first place.\n\nThe demote patch in this thread doesn't even start to attack much of the\nreal complexity around turning a primary into a standby.\n\n\nTo me the complexity of a restartless demotion is likely worth it. But\nit doesn't seem feasible to get there in one large step. So adding\nindividually usable sub-steps like ASRO makes sense imo.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 18 Jun 2020 10:49:55 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [patch] demote"
},
{
"msg_contents": "On Thu, Jun 18, 2020 at 1:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> It seems like this is the core decision that needs to be taken. If\n> we're willing to have these state transitions include a server restart,\n> then many things get simpler. If we're not, it's gonna cost us in\n> code complexity and hence bugs. Maybe the usability gain is worth it,\n> or maybe not.\n>\n> I think it would probably be worth the trouble to pursue both designs in\n> parallel for awhile, so we can get a better handle on exactly how much\n> complexity we're buying into with the more ambitious definition.\n\nI wouldn't vote to reject a patch that performed a demotion by doing a\nfull server restart, because it's a useful incremental step, but I\nwouldn't be excited about spending a lot of time on it, either,\nbecause it's basically crippleware by design. Either you have to\ncheckpoint before restarting, or you have to run recovery after\nrestarting, and either of those can be extremely slow. You also break\nall the connections, which can produce application errors unless the\napplications are pretty robustly designed, and you lose the entire\ncontents of shared_buffers, which makes things run very slowly even\nafter the restart is completed, which can cause a lengthy slow period\neven after the system is nominally back up. All of those things are\nreally bad, and AFAICT the first one is the worst by a considerable\nmargin. It can take 20 minutes to checkpoint and even longer to run\nrecovery, so doing this once per year already puts you outside of\nfive-nines territory, and we release critical updates that cannot be\napplied without a server restart about four times per year. 
That means\nthat if you perform updates by using a switchover -- a common practice\n-- and if you apply all of your updates in a timely fashion --\nunfortunately, not such a common practice, but one we'd surely like to\nencourage -- you can't even achieve four nines if a switchover\nrequires either a checkpoint or running recovery. And that's without\naccounting for any switchovers that you may need to perform for\nreasons unrelated to upgrades, and without accounting for any\nunplanned downtime. Not many people in 2020 are interested in running\na system with three nines of availability, so I think it is clear that\nwe need to do better.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Jun 2020 14:30:15 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [patch] demote"
},
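[Editorial note: the downtime arithmetic in the message above is easy to make concrete. A four-nines budget allows roughly 52.6 minutes of downtime per year, so four switchovers per year that each pay the message's example figure of a ~20-minute checkpoint-or-recovery pause (80 minutes total) already exceed it, while still fitting inside three nines (~526 minutes). A quick sanity check; the 20-minute figure is Robert's example, not a measured value:]

```python
MINUTES_PER_YEAR = 365.25 * 24 * 60  # = 525,960 minutes


def downtime_budget(nines: int) -> float:
    """Allowed downtime in minutes per year for an availability of N nines
    (e.g. 4 nines = 99.99% uptime)."""
    return MINUTES_PER_YEAR * 10 ** -nines


switchovers_per_year = 4        # one per quarterly critical update
minutes_per_switchover = 20     # checkpoint-or-recovery pause (example figure)
planned_downtime = switchovers_per_year * minutes_per_switchover  # 80 minutes
```

[So planned switchover downtime alone (80 min/year) overshoots the four-nines budget (~52.6 min/year) before counting any unplanned outages, which is the point being made about needing switchovers that avoid a full checkpoint or recovery pause.]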
{
"msg_contents": "Hi,\n\nHere is a summary of my work during the last few days on this demote approach.\n\nPlease find attached v2-0001-Demote-PoC.patch, with comments in the\ncommit message and as FIXME in the code.\n\nThe patch is not finished or bug-free yet, I'm still not very happy with the\ncoding style, and it probably lacks some more code documentation, but a lot has\nchanged since v1. It's still a PoC to push the discussion a bit further after\nbeing silent myself for some days.\n\nThe patch is currently relying on a demote checkpoint. I understand a forced\ncheckpoint overhead can be massive and cause major wait/downtime. But I keep\nthis for a later step. Maybe we should be able to cancel a running checkpoint?\nOr leave it to its syncing work but discard the result without writing it to\nXLog?\n\nI haven't had time to investigate Robert's concern about shared memory for snapshots\nduring recovery.\n\nThe patch doesn't deal with prepared xacts yet. Testing \"start->demote->promote\"\nraises an assert if some prepared xacts exist. I suppose I will roll them back\nduring demote in the next patch version.\n\nI'm not sure how to divide this patch into multiple small independent steps. I\nsuppose I can split it like:\n\n1. add demote checkpoint\n2. support demote: mostly postmaster, startup/xlog and checkpointer related\n code\n3. cli using pg_ctl demote\n\n...But I'm not sure it's worth it.\n\nRegards,",
"msg_date": "Thu, 25 Jun 2020 19:27:54 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: [patch] demote"
},
{
"msg_contents": "Hello.\n\nAt Thu, 25 Jun 2020 19:27:54 +0200, Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote in \n> Here is a summary of my work during the last few days on this demote approach.\n> \n> Please, find in attachment v2-0001-Demote-PoC.patch and the comments in the\n> commit message and as FIXME in code.\n> \n> The patch is not finished or bug-free yet, I'm still not very happy with the\n> coding style, it probably lack some more code documentation, but a lot has\n> changed since v1. It's still a PoC to push the discussion a bit further after\n> being myself silent for some days.\n> \n> The patch is currently relying on a demote checkpoint. I understand a forced\n> checkpoint overhead can be massive and cause major wait/downtime. But I keep\n> this for a later step. Maybe we should be able to cancel a running checkpoint?\n> Or leave it to its synching work but discard the result without wirting it to\n> XLog?\n\nIf we are going to dive so close to server shutdown, we can just\nutilize the restart-after-crash path, which we can assume to work\nreliably. The attached is a quite rough sketch of that, hijacking the smart\nshutdown path for convenience, but it seems to work. \"pg_ctl\n-m s -W stop\" lets the server demote.\n\n> I hadn't time to investigate Robert's concern about shared memory for snapshot\n> during recovery.\n\nThe patch does all required cleanup of resources including shared\nmemory, I believe. Is that enough, if we don't need to keep any resources\nalive?\n\n> The patch doesn't deal with prepared xact yet. Testing \"start->demote->promote\"\n> raise an assert if some prepared xact exist. I suppose I will rollback them\n> during demote in next patch version.\n> \n> I'm not sure how to divide this patch in multiple small independent steps. I\n> suppose I can split it like:\n> \n> 1. add demote checkpoint\n> 2. support demote: mostly postmaster, startup/xlog and checkpointer related\n> code\n> 3. 
cli using pg_ctl demote\n> \n> ...But I'm not sure it worth it.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 26 Jun 2020 16:14:38 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [patch] demote"
},
{
"msg_contents": "Mmm. Fat finger..\n\nAt Fri, 26 Jun 2020 16:14:38 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Hello.\n> \n> If we are going to dive so close to server shutdown, we can just\n> utilize the restart-after-crash path, which we can assume to work\n> reliably. The attached is a quite rough sketch, hijacking smart\n> shutdown path for a convenience, of that but seems working. \"pg_ctl\n> -m s -W stop\" lets server demote.\n> \n> > I hadn't time to investigate Robert's concern about shared memory for snapshot\n> > during recovery.\n> \n> The patch does all required clenaup of resources including shared\n\nThe path does all required cleanup of..\n\n> memory, I believe. It's enough if we don't need to keep any resources\n> alive?\n> \n> > The patch doesn't deal with prepared xact yet. Testing \"start->demote->promote\"\n> > raise an assert if some prepared xact exist. I suppose I will rollback them\n> > during demote in next patch version.\n> > \n> > I'm not sure how to divide this patch in multiple small independent steps. I\n> > suppose I can split it like:\n> > \n> > 1. add demote checkpoint\n> > 2. support demote: mostly postmaster, startup/xlog and checkpointer related\n> > code\n> > 3. cli using pg_ctl demote\n> > \n> > ...But I'm not sure it worth it.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 26 Jun 2020 16:26:16 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [patch] demote"
},
{
"msg_contents": "On Fri, 26 Jun 2020 16:14:38 +0900 (JST)\nKyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> Hello.\n> \n> At Thu, 25 Jun 2020 19:27:54 +0200, Jehan-Guillaume de Rorthais\n> <jgdr@dalibo.com> wrote in \n> > Here is a summary of my work during the last few days on this demote\n> > approach.\n> > \n> > Please, find in attachment v2-0001-Demote-PoC.patch and the comments in the\n> > commit message and as FIXME in code.\n> > \n> > The patch is not finished or bug-free yet, I'm still not very happy with the\n> > coding style, it probably lack some more code documentation, but a lot has\n> > changed since v1. It's still a PoC to push the discussion a bit further\n> > after being myself silent for some days.\n> > \n> > The patch is currently relying on a demote checkpoint. I understand a forced\n> > checkpoint overhead can be massive and cause major wait/downtime. But I keep\n> > this for a later step. Maybe we should be able to cancel a running\n> > checkpoint? Or leave it to its synching work but discard the result without\n> > wirting it to XLog?\n> \n> If we are going to dive so close to server shutdown, we can just\n> utilize the restart-after-crash path, which we can assume to work\n> reliably. The attached is a quite rough sketch, hijacking smart\n> shutdown path for a convenience, of that but seems working. \"pg_ctl\n> -m s -W stop\" lets server demote.\n\nThis was actually my very first toy PoC.\n\nHowever, resetting everything is far from the graceful demote I was seeking.\nMoreover, such a patch will not be able to evolve to eg. keep read only\nbackends around.\n\n> > I hadn't time to investigate Robert's concern about shared memory for\n> > snapshot during recovery.\n> \n> The patch does all required clenaup of resources including shared\n> memory, I believe. It's enough if we don't need to keep any resources\n> alive?\n\nResetting everything might not be enough. If I understand Robert's concern\ncorrectly, it might actually need more shmem for hot standby xact snapshot. Or\nmaybe some shmem init'ed differently.\n\nRegards,\n\n\n",
"msg_date": "Wed, 1 Jul 2020 12:15:55 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: [patch] demote"
},
{
"msg_contents": "Hi,\n\nHere is a small activity summary since the last report.\n\nOn Thu, 25 Jun 2020 19:27:54 +0200\nJehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n[...]\n> I hadn't time to investigate Robert's concern about shared memory for snapshot\n> during recovery.\n\nI hadn't time to dig very far, but I suppose this might be related to the\ncomment in ProcArrayShmemSize(). If I'm right, then it seems the space is\nalready allocated as long as hot_standby is enabled. I realize it doesn't mean\nwe are on the safe side of the fence though. I still have to get a better\nunderstanding of this.\n\n> The patch doesn't deal with prepared xact yet. Testing\n> \"start->demote->promote\" raise an assert if some prepared xact exist. I\n> suppose I will rollback them during demote in next patch version.\n\nRolling back all prepared transactions on demote seems easy. However, I realized\nthere's no point in canceling them. After the demote action, they might still be\ncommitted later on a promoted instance.\n\nI am currently trying to clean shared memory for existing prepared transactions\nso they are handled by the startup process during recovery.\nI've been able to clean TwoPhaseState and the ProcArray. I'm now in the\nprocess of cleaning the remaining prepared xact locks.\n\nRegards,\n\n\n",
"msg_date": "Fri, 3 Jul 2020 00:12:10 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: [patch] demote"
},
{
"msg_contents": "Hi,\n\nAnother summary + patch + tests.\n\nThis patch supports 2PC. The goal is to keep them safe during demote/promote\nactions so they can be committed/rollbacked later on a primary. See tests.\n\nThe checkpointer is now shut down after the demote shutdown checkpoint. It\nremoves some useless code complexity, eg. avoiding having to signal the\npostmaster from the checkpointer to keep going with the demotion. \n\nCascaded replication is now supported. Wal senders stay active during\ndemotion but set their local \"am_cascading_walsender = true\". It has been a\nrough debug session (thank you rr and tests!) on my side, but it might be worth\nit. I believe they should stay connected during the demote actions for future\nfeatures, eg. triggering a switchover over the replication protocol using an\nadmin function.\n\nThe first tests have been added in \"recovery/t/021_promote-demote.pl\". I'll add\nsome more tests in future versions.\n\nI believe the patch is ready for some preliminary tests and advice or\ndirections.\n\nOn my todo:\n\n* study how to only disconnect or cancel active RW backends\n * ...then add pg_demote() admin function\n* cancel running checkpoint for fast demote ?\n* user documentation\n* Robert's concern about snapshot during hot standby\n* some more coding style cleanup/refactoring\n* anything else reported to me :)\n\nThanks,\n\nOn Fri, 3 Jul 2020 00:12:10 +0200\nJehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n\n> Hi,\n> \n> Here is a small activity summary since last report.\n> \n> On Thu, 25 Jun 2020 19:27:54 +0200\n> Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n> [...]\n> > I hadn't time to investigate Robert's concern about shared memory for\n> > snapshot during recovery.\n> \n> I hadn't time to dig very far, but I suppose this might be related to the\n> comment in ProcArrayShmemSize(). If I'm right, then it seems the space is\n> already allocated as long as hot_standby is enabled. I realize it doesn't\n> means we are on the safe side of the fence though. I still have to have a\n> better understanding on this.\n> \n> > The patch doesn't deal with prepared xact yet. Testing\n> > \"start->demote->promote\" raise an assert if some prepared xact exist. I\n> > suppose I will rollback them during demote in next patch version.\n> \n> Rollback all prepared transaction on demote seems easy. However, I realized\n> there's no point to cancel them. After the demote action, they might still be\n> committed later on a promoted instance.\n> \n> I am currently trying to clean shared memory for existing prepared transaction\n> so they are handled by the startup process during recovery.\n> I've been able to clean TwoPhaseState and the ProcArray. I'm now in the\n> process to clean remaining prepared xact locks.\n> \n> Regards,\n> \n> \n\n\n\n-- \nJehan-Guillaume de Rorthais\nDalibo",
"msg_date": "Mon, 13 Jul 2020 17:04:49 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: [patch] demote"
},
{
"msg_contents": "On Mon, Jul 13, 2020 at 8:35 PM Jehan-Guillaume de Rorthais\n<jgdr@dalibo.com> wrote:\n>\n> Hi,\n>\n> Another summary + patch + tests.\n>\n> This patch supports 2PC. The goal is to keep them safe during demote/promote\n> actions so they can be committed/rollbacked later on a primary. See tests.\n>\n\nWondering is it necessary to clear prepared transactions from shared memory?\nCan't simply skip clearing and restoring prepared transactions while demoting?\n\nRegards,\nAmul\n\n\n",
"msg_date": "Tue, 14 Jul 2020 17:26:37 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [patch] demote"
},
{
"msg_contents": "Hi,\n\nOn 2020-07-14 17:26:37 +0530, Amul Sul wrote:\n> On Mon, Jul 13, 2020 at 8:35 PM Jehan-Guillaume de Rorthais\n> <jgdr@dalibo.com> wrote:\n> >\n> > Hi,\n> >\n> > Another summary + patch + tests.\n> >\n> > This patch supports 2PC. The goal is to keep them safe during demote/promote\n> > actions so they can be committed/rollbacked later on a primary. See tests.\n> >\n> \n> Wondering is it necessary to clear prepared transactions from shared memory?\n> Can't simply skip clearing and restoring prepared transactions while demoting?\n\nRecovery doesn't use the normal PGXACT/PGPROC mechanisms to store\ntransaction state. So I don't think it'd be correct to leave them around\nin their previous state. We'd likely end up with incorrect snapshots\nif a demoted node later gets promoted...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 Jul 2020 12:49:51 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [patch] demote"
},
{
"msg_contents": "On Tue, 14 Jul 2020 12:49:51 -0700\nAndres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n> \n> On 2020-07-14 17:26:37 +0530, Amul Sul wrote:\n> > On Mon, Jul 13, 2020 at 8:35 PM Jehan-Guillaume de Rorthais\n> > <jgdr@dalibo.com> wrote: \n> > >\n> > > Hi,\n> > >\n> > > Another summary + patch + tests.\n> > >\n> > > This patch supports 2PC. The goal is to keep them safe during\n> > > demote/promote actions so they can be committed/rollbacked later on a\n> > > primary. See tests. \n> > \n> > Wondering is it necessary to clear prepared transactions from shared memory?\n> > Can't simply skip clearing and restoring prepared transactions while\n> > demoting? \n> \n> Recovery doesn't use the normal PGXACT/PGPROC mechanisms to store\n> transaction state. So I don't think it'd be correct to leave them around\n> in their previous state. We'd likely end up with incorrect snapshots\n> if a demoted node later gets promoted...\n\nIndeed. I experienced it while debugging. PGXACT/PGPROC/locks need to\nbe cleared.\n\n\n",
"msg_date": "Tue, 14 Jul 2020 23:16:34 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: [patch] demote"
},
{
"msg_contents": "Hi,\n\nYet another summary + patch + tests.\n\nDemote now keeps backends with no active xid alive. Smart mode keeps all\nbackends: it waits for them to finish their xact and enter read-only. Fast\nmode terminates backends with an active xid and keeps all other ones.\nBackends enter \"read-only\" using LocalXLogInsertAllowed=0 and flip it to -1\n(check recovery state) once demoted.\nDuring demote, no new session is allowed.\n\nAs backends with no active xid survive, a new SQL admin function\n\"pg_demote(fast bool, wait bool, wait_seconds int)\" has been added.\n\nDemote now relies on sigusr1 instead of hijacking sigterm/sigint and pmdie().\nThe resulting refactoring makes the code much simpler, cleaner, with better\nisolation of actions from the code point of view.\n\nThanks to the refactoring, the patch now only adds one state to the state\nmachine: PM_DEMOTING. A second one could be used to replace:\n\n /* Demoting: start the Startup Process */\n if (DemoteSignal && pmState == PM_SHUTDOWN && CheckpointerPID == 0)\n\nwith eg.:\n\n if (pmState == PM_DEMOTED)\n\nI believe it might be a bit simpler to understand, but the existing comment\nmight be good enough as well. The full state machine path for demote is:\n\n PM_DEMOTING /* wait for active xid backend to finish */\n PM_SHUTDOWN /* wait for checkpoint shutdown and its \n various shutdown tasks */\n PM_SHUTDOWN && !CheckpointerPID /* aka PM_DEMOTED: start Startup process */\n PM_STARTUP\n\nTests in \"recovery/t/021_promote-demote.pl\" grow from 13 to 24 tests,\nadding tests on backend behaviors during demote and the new function pg_demote().\n\nOn my todo:\n\n* cancel running checkpoint for fast demote ?\n* forbid demote when PITR backup is in progress\n* user documentation\n* Robert's concern about snapshot during hot standby\n* anything else reported to me\n\nPlus, I might be able to split the backend part and their signals of the patch\n0002 in its own patch if it helps the review. It would apply after 0001 and\nbefore actual 0002.\n\nAs there was no consensus and the discussions seemed to conclude this patch set\nshould keep growing to see where it goes, I wonder if/when I should add it to\nthe commitfest. Advice? Opinion?\n\nRegards,",
"msg_date": "Wed, 5 Aug 2020 00:04:53 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: [patch] demote"
},
{
"msg_contents": "Hi,\n\nPlease find in attachment v5 of the patch set rebased on master after various\nconflicts.\n\nRegards,\n\nOn Wed, 5 Aug 2020 00:04:53 +0200\nJehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n\n> Demote now keeps backends with no active xid alive. Smart mode keeps all\n> backends: it waits for them to finish their xact and enter read-only. Fast\n> mode terminate backends wit an active xid and keeps all other ones.\n> Backends enters \"read-only\" using LocalXLogInsertAllowed=0 and flip it to -1\n> (check recovery state) once demoted.\n> During demote, no new session is allowed.\n> \n> As backends with no active xid survive, a new SQL admin function\n> \"pg_demote(fast bool, wait bool, wait_seconds int)\" had been added.\n> \n> Demote now relies on sigusr1 instead of hijacking sigterm/sigint and pmdie().\n> The resulting refactoring makes the code much simpler, cleaner, with better\n> isolation of actions from the code point of view.\n> \n> Thanks to the refactoring, the patch now only adds one state to the state\n> machine: PM_DEMOTING. A second one could be use to replace:\n> \n> /* Demoting: start the Startup Process */\n> if (DemoteSignal && pmState == PM_SHUTDOWN && CheckpointerPID == 0)\n> \n> with eg.:\n> \n> if (pmState == PM_DEMOTED)\n> \n> I believe it might be a bit simpler to understand, but the existing comment\n> might be good enough as well. The full state machine path for demote is:\n> \n> PM_DEMOTING /* wait for active xid backend to finish */\n> PM_SHUTDOWN /* wait for checkpoint shutdown and its \n> various shutdown tasks */\n> PM_SHUTDOWN && !CheckpointerPID /* aka PM_DEMOTED: start Startup process */\n> PM_STARTUP\n> \n> Tests in \"recovery/t/021_promote-demote.pl\" grows from 13 to 24 tests,\n> adding tests on backend behaviors during demote and new function pg_demote().\n> \n> On my todo:\n> \n> * cancel running checkpoint for fast demote ?\n> * forbid demote when PITR backup is in progress\n> * user documentation\n> * Robert's concern about snapshot during hot standby\n> * anything else reported to me\n> \n> Plus, I might be able to split the backend part and their signals of the patch\n> 0002 in its own patch if it helps the review. It would apply after 0001 and\n> before actual 0002.\n> \n> As there was no consensus and the discussions seemed to conclude this patch\n> set should keep growing to see were it goes, I wonder if/when I should add it\n> to the commitfest. Advice? Opinion?",
"msg_date": "Tue, 18 Aug 2020 17:41:31 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: [patch] demote"
},
{
"msg_contents": "On Tue, 18 Aug 2020 17:41:31 +0200\nJehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n\n> Hi,\n> \n> Please find in attachment v5 of the patch set rebased on master after various\n> conflicts.\n> \n> Regards,\n> \n> On Wed, 5 Aug 2020 00:04:53 +0200\n> Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n> \n> > Demote now keeps backends with no active xid alive. Smart mode keeps all\n> > backends: it waits for them to finish their xact and enter read-only. Fast\n> > mode terminate backends wit an active xid and keeps all other ones.\n> > Backends enters \"read-only\" using LocalXLogInsertAllowed=0 and flip it to -1\n> > (check recovery state) once demoted.\n> > During demote, no new session is allowed.\n> > \n> > As backends with no active xid survive, a new SQL admin function\n> > \"pg_demote(fast bool, wait bool, wait_seconds int)\" had been added.\n\nJust to keep the list informed, I found a race condition leading to backends\ntrying to write to XLog after they processed the demote signal. Eg.:\n\n [postmaster] LOG: all backends in read only\n [checkpointer] LOG: demoting\n [backend] PANIC: cannot make new WAL entries during recovery\n STATEMENT: UPDATE pgbench_accounts [...]\n\nBecause of this, the postmaster enters crash recovery while the demote\nis still in progress.\n\nI have a couple of other subjects right now, but I plan to get back to it soon.\n\nRegards,\n\n\n",
"msg_date": "Tue, 1 Sep 2020 11:23:05 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: [patch] demote"
}
] |
[
{
"msg_contents": "Hello, colleagues,\n\nI have one question related to my current measurements results.\n\nI am fetching integers in text format like:\n\nSelect * from table limit 10000000. It take 18.5 seconds to finish and the\ntransfer data is 633 MB.\n\nWhen I fetching the same data using binary cursor, the transfer data is 480\nMB, but the transfer time is 21.5 seconds?\n\nSo, I have one question could someone explain what can be the reason why\nthe transferring time for binary data is higher?\n\nBest regards,",
"msg_date": "Wed, 17 Jun 2020 11:55:44 -0700",
"msg_from": "Aleksei Ivanov <iv.alekseii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Binary transfer vs Text transfer"
},
{
"msg_contents": "Hi,\n\n\nOn 2020-06-17 11:55:44 -0700, Aleksei Ivanov wrote:\n> I have one question related to my current measurements results.\n> \n> I am fetching integers in text format like:\n> \n> Select * from table limit 10000000. It take 18.5 seconds to finish and the\n> transfer data is 633 MB.\n> \n> When I fetching the same data using binary cursor, the transfer data is 480\n> MB, but the transfer time is 21.5 seconds?\n> \n> So, I have one question could someone explain what can be the reason why\n> the transferring time for binary data is higher?\n\nThis thread might be interesting:\nhttps://www.postgresql.org/message-id/CAMkU%3D1whbRDUwa4eayD9%2B59K-coxO9senDkPRbTn3cg0pUz4AQ%40mail.gmail.com\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Jun 2020 12:12:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Binary transfer vs Text transfer"
},
{
"msg_contents": "Hi,\n\nThanks for the attached link, but I also noticed using iftop, that during\nfetching the data there is almost no delay using text transfer, while there\nis several seconds in delay before data is starting fetching using binary\ntransfer. Could you suggest where can I have a look to resolve it or\ndecrease that time?\n\nBest regards,\n\n\nOn Wed, Jun 17, 2020 at 12:12 Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n>\n> On 2020-06-17 11:55:44 -0700, Aleksei Ivanov wrote:\n> > I have one question related to my current measurements results.\n> >\n> > I am fetching integers in text format like:\n> >\n> > Select * from table limit 10000000. It take 18.5 seconds to finish and\n> the\n> > transfer data is 633 MB.\n> >\n> > When I fetching the same data using binary cursor, the transfer data is\n> 480\n> > MB, but the transfer time is 21.5 seconds?\n> >\n> > So, I have one question could someone explain what can be the reason why\n> > the transferring time for binary data is higher?\n>\n> This thread might be interesting:\n>\n> https://www.postgresql.org/message-id/CAMkU%3D1whbRDUwa4eayD9%2B59K-coxO9senDkPRbTn3cg0pUz4AQ%40mail.gmail.com\n>\n> Greetings,\n>\n> Andres Freund\n>",
"msg_date": "Wed, 17 Jun 2020 22:19:29 -0700",
"msg_from": "Aleksei Ivanov <iv.alekseii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary transfer vs Text transfer"
}
] |
[
{
"msg_contents": "This is a follow-up to Bug # 16492 which also links to a thread sent to\n-hackers back in 2018.\n\nI'm firmly of the belief that the existing behavior of DROP relation IF\nEXISTS is flawed - it should not be an error if there is a namespace\ncollision but the relkind of the existing relation doesn't match the\nrelkind set by the DROP command.\n\nSince our documentation fails to elaborate on any additional behavior, and\nuses the relkind in the description, our users (few as they may be) are\nrightly calling this a bug. I loosely believe that any behavior change in\nthis area should not be back-patched thus for released versions this is a\ndocumentation bug. I have attached a patch to fix that bug.\n\nIn putting together the patch I noticed that the existing drop_if_exists\nregression tests exercise the DROP DOMAIN command. Out of curiosity I\nincluded that in my namespace testing and discovered that DROP DOMAIN\nthinks of itself as being a relation for purposes of IF EXISTS but DROP\nTABLE does not. I modified both DROP DOMAIN and the Glossary in response\nto this finding - though I suspect to find disagreement with my choice. I\nlooked at pg_class for some guidance but a quick search for RELKIND_\n(DOMAIN) and finding nothing decided I didn't know enough and figured to\npunt on any further exploration of this inconsistency.\n\nThe documentation and tests need to go in and be back-patched. After that\nhappens I'll see whether and/or how to go about trying to get my PoV on the\nbehavioral change committed.\n\nDavid J.",
"msg_date": "Wed, 17 Jun 2020 15:47:20 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "DROP relation IF EXISTS Docs and Tests - Bug Fix"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> I'm firmly of the belief that the existing behavior of DROP relation IF\n> EXISTS is flawed - it should not be an error if there is a namespace\n> collision but the relkind of the existing relation doesn't match the\n> relkind set by the DROP command.\n\nI don't particularly agree, as I said in the other thread. The core\npoint here is that it's not clear to me why the specific error of\n\"wrong relkind\" deserves a pass, while other errors such as \"you're\nnot the owner\" don't. Both of those cases suggest that you're not\ntargeting the relation you think you are, and both of them would get\nin the way of a subsequent CREATE. To me, success of DROP IF EXISTS\nshould mean \"the coast is clear to do a CREATE\". With an exception\nlike this, a success would mean nothing at all.\n\nAnother point here is that we have largely the same issue with respect\nto different subclasses of routines (functions/procedures/aggregates)\nand types (base types/composite types/domains). If we do change\nsomething then I'd want to see it done consistently across all these\ncases.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Jun 2020 19:32:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DROP relation IF EXISTS Docs and Tests - Bug Fix"
},
{
"msg_contents": "On Wed, Jun 17, 2020 at 4:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > I'm firmly of the belief that the existing behavior of DROP relation IF\n> > EXISTS is flawed - it should not be an error if there is a namespace\n> > collision but the relkind of the existing relation doesn't match the\n> > relkind set by the DROP command.\n>\n>\nThe other thread:\n\nhttps://www.postgresql.org/message-id/CAKFQuwY90%3DGSX_65cYdAm18TWCv4CvnPdHCuH92qfzKSYaFnxQ%40mail.gmail.com\n\nI don't particularly agree, as I said in the other thread. The core\n> point here is that it's not clear to me why the specific error of\n> \"wrong relkind\" deserves a pass, while other errors such as \"you're\n> not the owner\" don't.\n\n\nBecause if you're not the owner then by definition the expected target\nexists and a drop is attempted - which can still fail.\n\n Both of those cases suggest that you're not\n> targeting the relation you think you are, and both of them would get\n> in the way of a subsequent CREATE.\n\n\nAgreed, as noted on the other thread we actually are not sufficiently\nparanoid in this situation. Specifically, we allow dropping a relation\nbased upon a search_path search when the target is not on the first entry\nin the search_path. I'd be glad to see that hole closed up - but this is\nstill broken even when the name is always schema qualified.\n\n To me, success of DROP IF EXISTS\n> should mean \"the coast is clear to do a CREATE\". With an exception\n> like this, a success would mean nothing at all.\n>\n\nTo me and at least some users DROP IF EXISTS means that the specific object\nI specified no longer exists, period.\n\nIf you want access to the behavior you describe go and write DROP ROUTINE.\nAs noted on the other thread I think that is a bad option but hey, it does\nhave the benefit of doing exactly what you describe.\n\nUsers can write the multiple drop commands necessary to get their create\ncommand to execute successfully. If the create command fails they can\nreact to that and figure out where their misunderstanding was. Is that\nreally so terrible?\n\nAnother point here is that we have largely the same issue with respect\n> to different subclasses of routines (functions/procedures/aggregates)\n> and types (base types/composite types/domains). If we do change\n> something then I'd want to see it done consistently across all these\n> cases.\n\n\nOk. I don't necessarily disagree. In fact the patch I submitted, which is\nthe on-topic discussion for this thread, brings up the very point that\ndomain behavior here is presently inconsistent.\n\nAt least for DROP TABLE IF EXISTS if we close up the hole with search_path\nresolution by introducing an actual \"found relation in the wrong location\"\nerror then the risk will have been removed - which exists outside of the IF\nEXISTS logic - and instead of not dropping a table and throwing an error we\njust are not dropping a table.\n\nSo, in summary, this thread is to document the current behavior [actual doc\nbug fix]. There is probably another thread buried in all of this for going\nthrough and finding other undocumented behaviors for other object types\n[potential doc bug fixes]. Then a thread for solidifying search_path\nhandling to actually fill in missing seemingly desirable safety features to\navoid drop target mis-identification (so we don't actually drop the wrong\nobject) [feature]. Then a thread to discuss whether or not dropping an\nobject that wasn't of the relkind that user specified should be an error\n[bug fix held up due to insufficient safety features]. Then a thread to\ndiscuss DROP ROUTINE [user choice of convenience over safety].\n\nDavid J.",
"msg_date": "Wed, 17 Jun 2020 17:17:26 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DROP relation IF EXISTS Docs and Tests - Bug Fix"
},
{
"msg_contents": "Hi\n\nčt 18. 6. 2020 v 0:47 odesílatel David G. Johnston <\ndavid.g.johnston@gmail.com> napsal:\n\n> This is a follow-up to Bug # 16492 which also links to a thread sent to\n> -hackers back in 2018.\n>\n> I'm firmly of the belief that the existing behavior of DROP relation IF\n> EXISTS is flawed - it should not be an error if there is a namespace\n> collision but the relkind of the existing relation doesn't match the\n> relkind set by the DROP command.\n>\n> Since our documentation fails to elaborate on any additional behavior, and\n> uses the relkind in the description, our users (few as they may be) are\n> rightly calling this a bug. I loosely believe that any behavior change in\n> this area should not be back-patched thus for released versions this is a\n> documentation bug. I have attached a patch to fix that bug.\n>\n> In putting together the patch I noticed that the existing drop_if_exists\n> regression tests exercise the DROP DOMAIN command. Out of curiosity I\n> included that in my namespace testing and discovered that DROP DOMAIN\n> thinks of itself as being a relation for purposes of IF EXISTS but DROP\n> TABLE does not. I modified both DROP DOMAIN and the Glossary in response\n> to this finding - though I suspect to find disagreement with my choice. I\n> looked at pg_class for some guidance but a quick search for RELKIND_\n> (DOMAIN) and finding nothing decided I didn't know enough and figured to\n> punt on any further exploration of this inconsistency.\n>\n> The documentation and tests need to go in and be back-patched. After that\n> happens I'll see whether and/or how to go about trying to get my PoV on the\n> behavioral change committed.\n>\n\nI am reading this patch. 
I don't think so text for domains and types are\ncorrect (or minimally it is little bit messy)\n\n+ This parameter instructs <productname>PostgreSQL</productname> to\nsearch\n+ for the first instance of any relation with the provided name.\n+ If no relations are found a notice is issued and the command ends.\n+ If a relation is found then one of two things will happen:\n+ if the relation is an domain it is dropped, otherwise the command\nfails.\n\n\"If no relations are found ...\".\n\nThis case is a little bit more complex - domains are not subset of\nrelations. But relations (in Postgres) extends types.\n\nSo in this case maybe modified text can be better\n\n+ This parameter instructs <productname>PostgreSQL</productname> to\nsearch\n+ for the first instance of any domain with the provided name in\npg_type catalog.\n+ If no type is found a notice is issued and the command ends.\n+ If a type is found then one of two things will happen:\n+ if the type is a domain it is dropped, otherwise the command fails.\nPostgres knows\n+ base types, composite types, relation related types and domain types.\n\nRegards\n\nPavel\n\n\n\n\n\n>\n> David J.\n>\n>",
"msg_date": "Mon, 13 Jul 2020 11:11:54 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP relation IF EXISTS Docs and Tests - Bug Fix"
},
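[Editor's note: the review's point that domains are types, not relations, can be checked directly against the system catalogs. A sketch; the object names are invented.]

```sql
CREATE DOMAIN posint AS int CHECK (VALUE > 0);
CREATE TABLE tbl (a int);

-- The domain appears only in pg_type (typtype = 'd') and has no pg_class
-- entry; the table has a pg_class row plus an associated composite row
-- type in pg_type (typtype = 'c').
SELECT typname, typtype FROM pg_type  WHERE typname IN ('posint', 'tbl');
SELECT relname, relkind FROM pg_class WHERE relname IN ('posint', 'tbl');
```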
{
"msg_contents": "po 13. 7. 2020 v 11:11 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> čt 18. 6. 2020 v 0:47 odesílatel David G. Johnston <\n> david.g.johnston@gmail.com> napsal:\n>\n>> This is a follow-up to Bug # 16492 which also links to a thread sent to\n>> -hackers back in 2018.\n>>\n>> I'm firmly of the belief that the existing behavior of DROP relation IF\n>> EXISTS is flawed - it should not be an error if there is a namespace\n>> collision but the relkind of the existing relation doesn't match the\n>> relkind set by the DROP command.\n>>\n>> Since our documentation fails to elaborate on any additional behavior,\n>> and uses the relkind in the description, our users (few as they may be) are\n>> rightly calling this a bug. I loosely believe that any behavior change in\n>> this area should not be back-patched thus for released versions this is a\n>> documentation bug. I have attached a patch to fix that bug.\n>>\n>> In putting together the patch I noticed that the existing drop_if_exists\n>> regression tests exercise the DROP DOMAIN command. Out of curiosity I\n>> included that in my namespace testing and discovered that DROP DOMAIN\n>> thinks of itself as being a relation for purposes of IF EXISTS but DROP\n>> TABLE does not. I modified both DROP DOMAIN and the Glossary in response\n>> to this finding - though I suspect to find disagreement with my choice. I\n>> looked at pg_class for some guidance but a quick search for RELKIND_\n>> (DOMAIN) and finding nothing decided I didn't know enough and figured to\n>> punt on any further exploration of this inconsistency.\n>>\n>> The documentation and tests need to go in and be back-patched. After\n>> that happens I'll see whether and/or how to go about trying to get my PoV\n>> on the behavioral change committed.\n>>\n>\n> I am reading this patch. 
I don't think so text for domains and types are\n> correct (or minimally it is little bit messy)\n>\n> + This parameter instructs <productname>PostgreSQL</productname> to\n> search\n> + for the first instance of any relation with the provided name.\n> + If no relations are found a notice is issued and the command ends.\n> + If a relation is found then one of two things will happen:\n> + if the relation is an domain it is dropped, otherwise the command\n> fails.\n>\n> \"If no relations are found ...\".\n>\n> This case is a little bit more complex - domains are not subset of\n> relations. But relations (in Postgres) extends types.\n>\n> So in this case maybe modified text can be better\n>\n> + This parameter instructs <productname>PostgreSQL</productname> to\n> search\n> + for the first instance of any domain with the provided name in\n> pg_type catalog.\n> + If no type is found a notice is issued and the command ends.\n> + If a type is found then one of two things will happen:\n> + if the type is a domain it is dropped, otherwise the command fails.\n> Postgres knows\n> + base types, composite types, relation related types and domain\n> types.\n>\n\ncreate type footyp as (a int, b int);\npostgres=# drop domain if exists footyp;\nERROR: \"footyp\" is not a domain\npostgres=#\n\n\n> Regards\n>\n> Pavel\n>\n>\n>\n>\n>\n>>\n>> David J.\n>>\n>>",
"msg_date": "Mon, 13 Jul 2020 11:26:26 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP relation IF EXISTS Docs and Tests - Bug Fix"
},
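[Editor's note: the asymmetry shown by the footyp example above also runs the other way - DROP TYPE is willing to drop a domain, while DROP DOMAIN refuses anything that is not a domain. A psql sketch, with invented names:]

```sql
CREATE TYPE footyp AS (a int, b int);
DROP DOMAIN IF EXISTS footyp;   -- ERROR:  "footyp" is not a domain

CREATE DOMAIN foodom AS int;
DROP TYPE IF EXISTS foodom;     -- succeeds: DROP TYPE drops domains too
```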
{
"msg_contents": "On Mon, Jul 13, 2020 at 2:12 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> I am reading this patch. I don't think so text for domains and types are\n> correct (or minimally it is little bit messy)\n> This case is a little bit more complex - domains are not subset of\n> relations. But relations (in Postgres) extends types.\n>\n\nYeah, though in further working on this I dislike the saying \"A composite\ntype is a relation\" (see Glossary and probably other spots). That a table\nauto-creates a separate composite type, and depends on it, manifests a\ncertain link between the two but the type that represents the table is not\na relation as it doesn't hold data, it is just a definition. If a\ncomposite type were a relation then whatever argument you use to justify\nthat would seem to apply to non-composite types as well.\n\nI'm attaching version 2 as a plain diff (complete) instead of a patch.\n\nNew with this version is the addition of tests for drop domain and drop\ntype, and related documentation changes. 
Notably pointing out the fact\nthat DROP TYPE drops all types, including domains.\n\nTo recap, the interesting relation related behaviors these tests\ndemonstrate are:\n\nA non-failure while performing a DROP \"relation\" IF EXISTS command means\nthat a subsequent CREATE \"relation\" command will not fail due to the name\nalready existing (other failures are of course possible).\n\nIn the presence of multiple schemas a failure of a DROP \"relation\" IF\nEXISTS command does not necessarily mean that a corresponding CREATE\n\"relation\" command would fail - the found entry could belong to a non-first\nschema on the search_path while the creation will place the newly created\nobject always on the first schema.\n\nThe plain meaning of the opposite of \"DROP IF EXISTS\" (i.e., it's not an\nerror if the specified object doesn't exist, just move on) is not what\nactually happens but rather we provide an additional test related to\nnamespace occupation that is now documented.\n\nThe latter two items are explicitly documented while the first is implicit\nand self-evident.\n\nDavid J.",
"msg_date": "Mon, 13 Jul 2020 15:37:12 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DROP relation IF EXISTS Docs and Tests - Bug Fix"
},
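[Editor's note: the second recap item - DROP IF EXISTS failing even though the corresponding CREATE would succeed - can be seen in a psql session like this. A sketch with invented names; the error reflects the released behavior at the time of the thread.]

```sql
CREATE SCHEMA s1;
CREATE SCHEMA s2;
CREATE VIEW s2.t AS SELECT 1 AS x;
SET search_path = s1, s2;

-- Fails: name resolution finds s2.t, which is a view, not a table.
DROP TABLE IF EXISTS t;   -- ERROR:  "t" is not a table

-- Yet creation would succeed, because it targets the first schema.
CREATE TABLE t (id int);  -- creates s1.t
```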
{
"msg_contents": "út 14. 7. 2020 v 0:37 odesílatel David G. Johnston <\ndavid.g.johnston@gmail.com> napsal:\n\n> On Mon, Jul 13, 2020 at 2:12 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>> I am reading this patch. I don't think so text for domains and types are\n>> correct (or minimally it is little bit messy)\n>> This case is a little bit more complex - domains are not subset of\n>> relations. But relations (in Postgres) extends types.\n>>\n>\n> Yeah, though in further working on this I dislike the saying \"A composite\n> type is a relation\" (see Glossary and probably other spots). That a table\n> auto-creates a separate composite type, and depends on it, manifests a\n> certain link between the two but the type that represents the table is not\n> a relation as it doesn't hold data, it is just a definition. If a\n> composite type were a relation then whatever argument you use to justify\n> that would seem to apply to non-composite types as well.\n>\n> I'm attaching version 2 as a plain diff (complete) instead of a patch.\n>\n> New with this version is the addition of tests for drop domain and drop\n> type, and related documentation changes. 
Notably pointing out the fact\n> that DROP TYPE drops all types, including domains.\n>\n> To recap, the interesting relation related behaviors these tests\n> demonstrate are:\n>\n> A non-failure while performing a DROP \"relation\" IF EXISTS command means\n> that a subsequent CREATE \"relation\" command will not fail due to the name\n> already existing (other failures are of course possible).\n>\n> In the presence of multiple schemas a failure of a DROP \"relation\" IF\n> EXISTS command does not necessarily mean that a corresponding CREATE\n> \"relation\" command would fail - the found entry could belong to a non-first\n> schema on the search_path while the creation will place the newly created\n> object always on the first schema.\n>\n> The plain meaning of the opposite of \"DROP IF EXISTS\" (i.e., it's not an\n> error if the specified object doesn't exist, just move on) is not what\n> actually happens but rather we provide an additional test related to\n> namespace occupation that is now documented.\n>\n> The latter two items are explicitly documented while the first is implicit\n> and self-evident.\n>\n\nI think so now all changes are correct and valuable. I'll mark this patch\nas ready for commit\n\nThank you for patch\n\nRegards\n\nPavel\n\n>\n> David J.\n>\n>",
"msg_date": "Tue, 14 Jul 2020 07:25:56 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP relation IF EXISTS Docs and Tests - Bug Fix"
},
{
"msg_contents": "On Tue, Jul 14, 2020 at 07:25:56AM +0200, Pavel Stehule wrote:\n> út 14. 7. 2020 v 0:37 odesílatel David G. Johnston <david.g.johnston@gmail.com> napsal:\n> > On Mon, Jul 13, 2020 at 2:12 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> I think so now all changes are correct and valuable. I'll mark this patch\n> as ready for commit\n\nThis is failing relevant tests in cfbot:\n\n drop_if_exists ... FAILED 450 ms\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 14 Jul 2020 07:40:35 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP relation IF EXISTS Docs and Tests - Bug Fix"
},
{
"msg_contents": "On Tue, Jul 14, 2020 at 5:40 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Tue, Jul 14, 2020 at 07:25:56AM +0200, Pavel Stehule wrote:\n> > út 14. 7. 2020 v 0:37 odesílatel David G. Johnston <\n> david.g.johnston@gmail.com> napsal:\n> > > On Mon, Jul 13, 2020 at 2:12 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > I think so now all changes are correct and valuable. I'll mark this patch\n> > as ready for commit\n>\n> This is failing relevant tests in cfbot:\n>\n> drop_if_exists ... FAILED 450 ms\n>\n>\nOops, did a minor whitespace cleanup in the test file and didn't re-copy\nexpected output. I'm actually going to try and clean up the commenting in\nthe test file a bit to make it easier to read, and split out the glossary\nchanges into their own diff so that the bulk of the changes can be\nback-patched.\n\nFurther comments welcome so I'm putting it back into needs review for the\nmoment while I work on the refactor.\n\nDavid J.",
"msg_date": "Tue, 14 Jul 2020 06:54:50 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DROP relation IF EXISTS Docs and Tests - Bug Fix"
},
{
"msg_contents": "út 14. 7. 2020 v 15:55 odesílatel David G. Johnston <\ndavid.g.johnston@gmail.com> napsal:\n\n> On Tue, Jul 14, 2020 at 5:40 AM Justin Pryzby <pryzby@telsasoft.com>\n> wrote:\n>\n>> On Tue, Jul 14, 2020 at 07:25:56AM +0200, Pavel Stehule wrote:\n>> > út 14. 7. 2020 v 0:37 odesílatel David G. Johnston <\n>> david.g.johnston@gmail.com> napsal:\n>> > > On Mon, Jul 13, 2020 at 2:12 AM Pavel Stehule <\n>> pavel.stehule@gmail.com> wrote:\n>> > I think so now all changes are correct and valuable. I''l mark this\n>> patch\n>> > as ready for commit\n>>\n>> This is failing relevant tests in cfbot:\n>>\n>> drop_if_exists ... FAILED 450 ms\n>>\n>>\n> Oops, did a minor whitespace cleanup in the test file and didn't re-copy\n> expected output. I'm actually going to try and clean up the commenting in\n> the test file a bit to make it easier to read, and split out the glossary\n> changes into their own diff so that the bulk of the changes can be\n> back-patched.\n>\n> Further comments welcome so I'm putting it back into needs review for the\n> moment while I work on the refactor.\n>\n\nattached fixed patch\n\nall tests passed\ndoc build without problems\n\n\n> David J.\n>",
"msg_date": "Tue, 14 Jul 2020 15:56:16 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP relation IF EXISTS Docs and Tests - Bug Fix"
},
{
"msg_contents": "On Tue, Jul 14, 2020 at 6:56 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> út 14. 7. 2020 v 15:55 odesílatel David G. Johnston <\n> david.g.johnston@gmail.com> napsal:\n>\n>> Further comments welcome so I'm putting it back into needs review for the\n>> moment while I work on the refactor.\n>>\n>\n> attached fixed patch\n>\n> all tests passed\n> doc build without problems\n>\n\nThanks.\n\nActually, one question I didn't pose before, does the SQL standard define\nDROP TYPE to target domains while also providing for a DROP DOMAIN\ncommand? Do drop commands for the other types we have not exist because\nthose aren't SQL standard types (or the standard they are standard types\nbut the commands aren't defined)?\n\nDavid J.",
"msg_date": "Tue, 14 Jul 2020 07:09:17 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DROP relation IF EXISTS Docs and Tests - Bug Fix"
},
{
"msg_contents": "út 14. 7. 2020 v 16:09 odesílatel David G. Johnston <\ndavid.g.johnston@gmail.com> napsal:\n\n> On Tue, Jul 14, 2020 at 6:56 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>> út 14. 7. 2020 v 15:55 odesílatel David G. Johnston <\n>> david.g.johnston@gmail.com> napsal:\n>>\n>>> Further comments welcome so I'm putting it back into needs review for\n>>> the moment while I work on the refactor.\n>>>\n>>\n>> attached fixed patch\n>>\n>> all tests passed\n>> doc build without problems\n>>\n>\n> Thanks.\n>\n> Actually, one question I didn't pose before, does the SQL standard define\n> DROP TYPE to target domains while also providing for a DROP DOMAIN\n> command? Do drop commands for the other types we have not exist because\n> those aren't SQL standard types (or the standard they are standard types\n> but the commands aren't defined)?\n>\n\nIt looks like Postgres user defined types are something else than ANSI SQL\n- so CREATE TYPE and DROP TYPE did different work.\n\nIn the section DROP TYPE in ANSI SQL there is not mentioned any relation to\ndomains.\n\n\n> David J.\n>",
"msg_date": "Tue, 14 Jul 2020 16:20:43 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP relation IF EXISTS Docs and Tests - Bug Fix"
},
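[Editor's note: for reference, the standard-conforming spelling the discussion circles around looks like this in PostgreSQL. A sketch; the domain name and constraint are invented.]

```sql
-- SQL-standard domain definition and removal.
CREATE DOMAIN us_postal_code AS text
    CHECK (VALUE ~ '^\d{5}$' OR VALUE ~ '^\d{5}-\d{4}$');

DROP DOMAIN us_postal_code;    -- the standard's command for domains
-- DROP TYPE us_postal_code;   -- would also work in PostgreSQL, but that
--                                behavior is a PostgreSQL extension
```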
{
"msg_contents": "On Tue, Jul 14, 2020 at 7:21 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> út 14. 7. 2020 v 16:09 odesílatel David G. Johnston <\n> david.g.johnston@gmail.com> napsal:\n>\n>> On Tue, Jul 14, 2020 at 6:56 AM Pavel Stehule <pavel.stehule@gmail.com>\n>> wrote:\n>>\n>>> út 14. 7. 2020 v 15:55 odesílatel David G. Johnston <\n>>> david.g.johnston@gmail.com> napsal:\n>>>\n>>>> Further comments welcome so I'm putting it back into needs review for\n>>>> the moment while I work on the refactor.\n>>>>\n>>>\n>>> attached fixed patch\n>>>\n>>> all tests passed\n>>> doc build without problems\n>>>\n>>\n>> Thanks.\n>>\n>> Actually, one question I didn't pose before, does the SQL standard define\n>> DROP TYPE to target domains while also providing for a DROP DOMAIN\n>> command? Do drop commands for the other types we have not exist because\n>> those aren't SQL standard types (or the standard they are standard types\n>> but the commands aren't defined)?\n>>\n>\n> It looks like Postgres user defined types are something else than ANSI SQL\n> - so CREATE TYPE and DROP TYPE did different work.\n>\n> In the section DROP TYPE in ANSI SQL there is not mentioned any relation\n> to domains.\n>\n\nAttaching a backpatch-able patch for the main docs and tests, v4\nAdded a head-only patch for the glossary changes, set to v4 as well.\n\nI didn't try and address any SQL standard dynamics here.\n\nDavid J.",
"msg_date": "Tue, 14 Jul 2020 09:01:24 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DROP relation IF EXISTS Docs and Tests - Bug Fix"
},
{
"msg_contents": "Hi!\n\nI've skimmed through the thread and checked the patchset. Everything\nlooks good, except one paragraph, which doesn't look completely clear.\n\n+ <para>\n+ This emulates the functionality provided by\n+ <xref linkend=\"sql-createtype\"/> but sets the created object's\n+ <glossterm linkend=\"glossary-type-definition\">type definition</glossterm>\n+ to domain.\n+ </para>\n\nAs I get it states that CREATE DOMAIN somehow \"emulates\" CREATE TYPE.\nCould you please, rephrase it? It looks confusing to me yet.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Wed, 16 Sep 2020 01:48:32 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP relation IF EXISTS Docs and Tests - Bug Fix"
},
{
"msg_contents": "On Tue, Sep 15, 2020 at 3:48 PM Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n> Hi!\n>\n> I've skimmed through the thread and checked the patchset. Everything\n> looks good, except one paragraph, which doesn't look completely clear.\n>\n> + <para>\n> + This emulates the functionality provided by\n> + <xref linkend=\"sql-createtype\"/> but sets the created object's\n> + <glossterm linkend=\"glossary-type-definition\">type\n> definition</glossterm>\n> + to domain.\n> + </para>\n>\n> As I get it states that CREATE DOMAIN somehow \"emulates\" CREATE TYPE.\n> Could you please, rephrase it? It looks confusing to me yet.\n>\n>\nI'll look at it.\n\nMy main point here is that writing \"CREATE TYPE typename AS DOMAIN\" would\nbe expected, with the appropriate sub-specification, similar to \"CREATE\nTYPE typename AS RANGE\". While the syntax wasn't rolled up into \"CREATE\nTYPE\" proper \"CREATE DOMAIN\" effectively does the same thing - creates a\ntype of domain (just as CREATE TYPE AS RANGE creates a type of range).\nI'm calling \"a type of something\" the type's \"type domain\". CREATE DOMAIN\nemulates the non-existent \"CREATE TYPE typename AS DOMAIN\" command.\n\nDavid J.",
"msg_date": "Wed, 16 Sep 2020 15:20:00 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DROP relation IF EXISTS Docs and Tests - Bug Fix"
},
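[Editor's note: the analogy above can be made concrete - range types are created through a CREATE TYPE sub-form, while domains have their own top-level command. A sketch with invented names:]

```sql
-- A range type is created through CREATE TYPE's AS RANGE form...
CREATE TYPE floatrange AS RANGE (subtype = float8);

-- ...but there is no "CREATE TYPE ... AS DOMAIN" form; domains use a
-- separate top-level command instead.
CREATE DOMAIN posint AS int CHECK (VALUE > 0);
```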
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> My main point here is that writing \"CREATE TYPE typename AS DOMAIN\" would\n> be expected, with the appropriate sub-specification, similar to \"CREATE\n> TYPE typename AS RANGE\".\n\nWell, that point seems entirely invented. CREATE DOMAIN is in the\nSQL standard:\n\n\t<domain definition> ::=\n\t CREATE DOMAIN <domain name> [ AS ] <predefined type>\n\t [ <default clause> ]\n\t [ <domain constraint>... ]\n\t [ <collate clause> ]\n\nWhile SQL does also have a CREATE TYPE command, domains are not\namong the kinds of type it can make. So that separation is\nvery much per spec.\n\n\nI don't personally find the doc changes proposed here to be a good idea.\n001 seems to add a lot of verbosity and not much else. 002 invents terms\nused nowhere else in our docs, which seems more confusing than anything\nelse. It is very badly in need of copy-editing, as well.\n\nAlso, I think the phrase you are looking for might be \"type category\".\nUsing \"type definition\" to mean that seems completely wrong. Deciding\nthat capitalized Type means something special is something I might expect\nto find in one of the more abstruse philosophers, but it's not a great\nidea in the Postgres manual ... especially when you then use different\nterminology elsewhere.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 16 Sep 2020 19:42:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DROP relation IF EXISTS Docs and Tests - Bug Fix"
},
{
"msg_contents": "On Wed, Sep 16, 2020 at 4:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > My main point here is that writing \"CREATE TYPE typename AS DOMAIN\" would\n> > be expected, with the appropriate sub-specification, similar to \"CREATE\n> > TYPE typename AS RANGE\".\n>\n> Well, that point seems entirely invented. CREATE DOMAIN is in the\n> SQL standard:\n>\n> <domain definition> ::=\n> CREATE DOMAIN <domain name> [ AS ] <predefined type>\n> [ <default clause> ]\n> [ <domain constraint>... ]\n> [ <collate clause> ]\n>\n> While SQL does also have a CREATE TYPE command, domains are not\n> among the kinds of type it can make. So that separation is\n> very much per spec.\n>\n>\n> I don't personally find the doc changes proposed here to be a good idea.\n> 001 seems to add a lot of verbosity and not much else.\n\n\nThe intent is to add accuracy, which means verbosity given the non-obvious\nchoice made in the current implementation.\n\n\n> 002 invents terms\n> used nowhere else in our docs, which seems more confusing than anything\n> else.\n\n\nFair point - was hoping it would be discussion starter.\n\n It is very badly in need of copy-editing, as well.\n>\n\nI'll look at it with fresh eyes...\n\nAlso, I think the phrase you are looking for might be \"type category\".\n>\n\nActually what I want is \"Type type (typtype)\" according to pg_type but that\nseemed like an implementation detail that would be undesirable to use here\nso I tried to give it a different name. Type category (typcategory)\nalready has a meaning.\n\nUsing \"type definition\" to mean that seems completely wrong. Deciding\n> that capitalized Type means something special is something I might expect\n> to find in one of the more abstruse philosophers, but it's not a great\n> idea in the Postgres manual ... 
especially when you then use different\n> terminology elsewhere.\n>\n\nI very well may have been inconsistent but coupled with the above point\n\"type of the Type\" seems easier to follow compared to \"type of the type\" if\nI were to change \"type definition\" to \"type of the Type\".\n\nDavid J.\n\nOn Wed, Sep 16, 2020 at 4:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> My main point here is that writing \"CREATE TYPE typename AS DOMAIN\" would\n> be expected, with the appropriate sub-specification, similar to \"CREATE\n> TYPE typename AS RANGE\".\n\nWell, that point seems entirely invented. CREATE DOMAIN is in the\nSQL standard:\n\n <domain definition> ::=\n CREATE DOMAIN <domain name> [ AS ] <predefined type>\n [ <default clause> ]\n [ <domain constraint>... ]\n [ <collate clause> ]\n\nWhile SQL does also have a CREATE TYPE command, domains are not\namong the kinds of type it can make. So that separation is\nvery much per spec.\n\n\nI don't personally find the doc changes proposed here to be a good idea.\n001 seems to add a lot of verbosity and not much else.The intent is to add accuracy, which means verbosity given the non-obvious choice made in the current implementation. 002 invents terms\nused nowhere else in our docs, which seems more confusing than anything\nelse. Fair point - was hoping it would be discussion starter. It is very badly in need of copy-editing, as well.I'll look at it with fresh eyes...\nAlso, I think the phrase you are looking for might be \"type category\".Actually what I want is \"Type type (typtype)\" according to pg_type but that seemed like an implementation detail that would be undesirable to use here so I tried to give it a different name. Type category (typcategory) already has a meaning.\nUsing \"type definition\" to mean that seems completely wrong. 
Deciding\nthat capitalized Type means something special is something I might expect\nto find in one of the more abstruse philosophers, but it's not a great\nidea in the Postgres manual ... especially when you then use different\nterminology elsewhere.I very well may have been inconsistent but coupled with the above point \"type of the Type\" seems easier to follow compared to \"type of the type\" if I were to change \"type definition\" to \"type of the Type\".David J.",
"msg_date": "Wed, 16 Sep 2020 17:01:33 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DROP relation IF EXISTS Docs and Tests - Bug Fix"
},
{
"msg_contents": "On Wed, Sep 16, 2020 at 4:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > My main point here is that writing \"CREATE TYPE typename AS DOMAIN\" would\n> > be expected, with the appropriate sub-specification, similar to \"CREATE\n> > TYPE typename AS RANGE\".\n>\n> Well, that point seems entirely invented. CREATE DOMAIN is in the\n> SQL standard:\n>\n\nAnd I'm writing for the user who sees that both \"CREATE DOMAIN\" and \"CREATE\nTYPE AS RANGE\" exist, and that there is no \"CREATE RANGE\", and wonders why\nif domains are simply a variant of a type, like ranges are, why doesn't\nCREATE TYPE just create those as well - or, rather, are there any material\ndifferences. I choose to include an observation that, no, they are not\nmaterially different in terms of being abstract types.\n\nIt struck me as odd that it wasn't just CREATE TYPE AS DOMAIN and so in my\npatch I thought to comment upon the oddity - and in doing so emphasize that\nthe DROP behavior for DOMAINS is no different than the types created by the\nCREATE TYPE command.\n\nDavid J.\n\nOn Wed, Sep 16, 2020 at 4:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> My main point here is that writing \"CREATE TYPE typename AS DOMAIN\" would\n> be expected, with the appropriate sub-specification, similar to \"CREATE\n> TYPE typename AS RANGE\".\n\nWell, that point seems entirely invented. CREATE DOMAIN is in the\nSQL standard:And I'm writing for the user who sees that both \"CREATE DOMAIN\" and \"CREATE TYPE AS RANGE\" exist, and that there is no \"CREATE RANGE\", and wonders why if domains are simply a variant of a type, like ranges are, why doesn't CREATE TYPE just create those as well - or, rather, are there any material differences. 
I choose to include an observation that, no, they are not materially different in terms of being abstract types.It struck me as odd that it wasn't just CREATE TYPE AS DOMAIN and so in my patch I thought to comment upon the oddity - and in doing so emphasize that the DROP behavior for DOMAINS is no different than the types created by the CREATE TYPE command.David J.",
"msg_date": "Wed, 16 Sep 2020 17:12:29 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DROP relation IF EXISTS Docs and Tests - Bug Fix"
},
{
"msg_contents": "On Tue, Sep 15, 2020 at 3:48 PM Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n> Hi!\n>\n> I've skimmed through the thread and checked the patchset. Everything\n> looks good, except one paragraph, which doesn't look completely clear.\n>\n> + <para>\n> + This emulates the functionality provided by\n> + <xref linkend=\"sql-createtype\"/> but sets the created object's\n> + <glossterm linkend=\"glossary-type-definition\">type\n> definition</glossterm>\n> + to domain.\n> + </para>\n>\n> As I get it states that CREATE DOMAIN somehow \"emulates\" CREATE TYPE.\n> Could you please, rephrase it? It looks confusing to me yet.\n>\n\nv5 attached, looking at this fresh and with some comments to consider.\n\nI ended up just combining both patches into one.\n\nI did away with the glossary changes altogether, and the invention of the\nnew term. I ended up limiting \"type's type\" to just domain usage but did a\ncouple of a additional tweaks that tried to treat domains as not being\nactual types even though, at least in PostgreSQL, they are (at least as far\nas DROP TYPE is concerned - and since I don't have any understanding of the\nSQL Standard's decision to separate out create domain and create type I'll\njust stick to the implementation in front of me.\n\nDavid J.",
"msg_date": "Tue, 29 Sep 2020 19:00:46 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DROP relation IF EXISTS Docs and Tests - Bug Fix"
},
{
"msg_contents": "st 30. 9. 2020 v 4:01 odesílatel David G. Johnston <\ndavid.g.johnston@gmail.com> napsal:\n\n> On Tue, Sep 15, 2020 at 3:48 PM Alexander Korotkov <aekorotkov@gmail.com>\n> wrote:\n>\n>> Hi!\n>>\n>> I've skimmed through the thread and checked the patchset. Everything\n>> looks good, except one paragraph, which doesn't look completely clear.\n>>\n>> + <para>\n>> + This emulates the functionality provided by\n>> + <xref linkend=\"sql-createtype\"/> but sets the created object's\n>> + <glossterm linkend=\"glossary-type-definition\">type\n>> definition</glossterm>\n>> + to domain.\n>> + </para>\n>>\n>> As I get it states that CREATE DOMAIN somehow \"emulates\" CREATE TYPE.\n>> Could you please, rephrase it? It looks confusing to me yet.\n>>\n>\n> v5 attached, looking at this fresh and with some comments to consider.\n>\n> I ended up just combining both patches into one.\n>\n> I did away with the glossary changes altogether, and the invention of the\n> new term. I ended up limiting \"type's type\" to just domain usage but did a\n> couple of a additional tweaks that tried to treat domains as not being\n> actual types even though, at least in PostgreSQL, they are (at least as far\n> as DROP TYPE is concerned - and since I don't have any understanding of the\n> SQL Standard's decision to separate out create domain and create type I'll\n> just stick to the implementation in front of me.\n>\n\n+1\n\nPavel\n\n\n> David J.\n>\n\nst 30. 9. 2020 v 4:01 odesílatel David G. Johnston <david.g.johnston@gmail.com> napsal:On Tue, Sep 15, 2020 at 3:48 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:Hi!\n\nI've skimmed through the thread and checked the patchset. 
Everything\nlooks good, except one paragraph, which doesn't look completely clear.\n\n+ <para>\n+ This emulates the functionality provided by\n+ <xref linkend=\"sql-createtype\"/> but sets the created object's\n+ <glossterm linkend=\"glossary-type-definition\">type definition</glossterm>\n+ to domain.\n+ </para>\n\nAs I get it states that CREATE DOMAIN somehow \"emulates\" CREATE TYPE.\nCould you please, rephrase it? It looks confusing to me yet.v5 attached, looking at this fresh and with some comments to consider.I ended up just combining both patches into one. I did away with the glossary changes altogether, and the invention of the new term. I ended up limiting \"type's type\" to just domain usage but did a couple of a additional tweaks that tried to treat domains as not being actual types even though, at least in PostgreSQL, they are (at least as far as DROP TYPE is concerned - and since I don't have any understanding of the SQL Standard's decision to separate out create domain and create type I'll just stick to the implementation in front of me.+1PavelDavid J.",
"msg_date": "Wed, 30 Sep 2020 06:20:25 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP relation IF EXISTS Docs and Tests - Bug Fix"
},
{
"msg_contents": "On 30.09.2020 05:00, David G. Johnston wrote:\n> On Tue, Sep 15, 2020 at 3:48 PM Alexander Korotkov \n> <aekorotkov@gmail.com <mailto:aekorotkov@gmail.com>> wrote:\n>\n> Hi!\n>\n> I've skimmed through the thread and checked the patchset. Everything\n> looks good, except one paragraph, which doesn't look completely clear.\n>\n> + <para>\n> + This emulates the functionality provided by\n> + <xref linkend=\"sql-createtype\"/> but sets the created object's\n> + <glossterm linkend=\"glossary-type-definition\">type\n> definition</glossterm>\n> + to domain.\n> + </para>\n>\n> As I get it states that CREATE DOMAIN somehow \"emulates\" CREATE TYPE.\n> Could you please, rephrase it? It looks confusing to me yet.\n>\n>\n> v5 attached, looking at this fresh and with some comments to consider.\n>\n> I ended up just combining both patches into one.\n>\n> I did away with the glossary changes altogether, and the invention of \n> the new term. I ended up limiting \"type's type\" to just domain usage \n> but did a couple of a additional tweaks that tried to treat domains as \n> not being actual types even though, at least in PostgreSQL, they are \n> (at least as far as DROP TYPE is concerned - and since I don't have \n> any understanding of the SQL Standard's decision to separate out \n> create domain and create type I'll just stick to the implementation in \n> front of me.\n>\n> David J.\n\nReminder from a CF manager, as this thread was inactive for a while.\nAlexander, I see you signed up as a committer for this entry. Are you \ngoing to continue this work?\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\n\nOn 30.09.2020 05:00, David G. Johnston\n wrote:\n\n\n\n\n\nOn Tue, Sep\n 15, 2020 at 3:48 PM Alexander Korotkov <aekorotkov@gmail.com>\n wrote:\n\n\n\nHi!\n\n I've skimmed through the thread and checked the patchset. 
\n Everything\n looks good, except one paragraph, which doesn't look\n completely clear.\n\n + <para>\n + This emulates the functionality provided by\n + <xref linkend=\"sql-createtype\"/> but sets the\n created object's\n + <glossterm linkend=\"glossary-type-definition\">type\n definition</glossterm>\n + to domain.\n + </para>\n\n As I get it states that CREATE DOMAIN somehow \"emulates\"\n CREATE TYPE.\n Could you please, rephrase it? It looks confusing to me\n yet.\n\n\n\nv5 attached,\n looking at this fresh and with some comments to consider.\n\n\nI ended up\n just combining both patches into one. \n \n\nI did away\n with the glossary changes altogether, and the invention of\n the new term. I ended up limiting \"type's type\" to just\n domain usage but did a couple of a additional tweaks that\n tried to treat domains as not being actual types even\n though, at least in PostgreSQL, they are (at least as far as\n DROP TYPE is concerned - and since I don't have any\n understanding of the SQL Standard's decision to separate out\n create domain and create type I'll just stick to the\n implementation in front of me.\n\n\nDavid J.\n\n\n\nReminder from a CF manager, as this thread was inactive for a\n while.\n Alexander, I see you signed up as a committer for this entry. Are\n you going to continue this work?\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 23 Nov 2020 23:31:20 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: DROP relation IF EXISTS Docs and Tests - Bug Fix"
},
{
"msg_contents": "Hi David,\n\nOn 11/23/20 3:31 PM, Anastasia Lubennikova wrote:\n> On 30.09.2020 05:00, David G. Johnston wrote:\n>>\n>> v5 attached, looking at this fresh and with some comments to consider.\n>>\n>> I ended up just combining both patches into one.\n>>\n>> I did away with the glossary changes altogether, and the invention of \n>> the new term. I ended up limiting \"type's type\" to just domain usage \n>> but did a couple of a additional tweaks that tried to treat domains as \n>> not being actual types even though, at least in PostgreSQL, they are \n>> (at least as far as DROP TYPE is concerned - and since I don't have \n>> any understanding of the SQL Standard's decision to separate out \n>> create domain and create type I'll just stick to the implementation in \n>> front of me.\n> \n> Reminder from a CF manager, as this thread was inactive for a while.\n> Alexander, I see you signed up as a committer for this entry. Are you \n> going to continue this work?\n\nThis patch was marked Ready for Committer on July 14 but received a \nsignificant update on August 30. So, I have marked it Needs Review.\n\nFurther, I think we should close this entry at the end of the CF if it \ndoes not attract committer interest. Tom is not in favor of the patch \nand it appears Alexander decided not to commit it.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Tue, 9 Mar 2021 08:40:46 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: DROP relation IF EXISTS Docs and Tests - Bug Fix"
},
{
"msg_contents": "On Tuesday, March 9, 2021, David Steele <david@pgmasters.net> wrote:\n\n>\n> Further, I think we should close this entry at the end of the CF if it\n> does not attract committer interest. Tom is not in favor of the patch and\n> it appears Alexander decided not to commit it.\n>\n\nPavel re-reviewed it and was fine with ready-to-commit so that status seems\nfine.\n\nFrankly, I am hoping for a bit more constructive feedback and even\ncollaboration from a committer, specifically Tom, on this one given the\noutstanding user complaints received on the topic, our disagreement\nregarding fixing it (which motivates the patch to better document and add\ntests), and professional courtesy given to a fellow consistent community\ncontributor.\n\nSo, no, making it just go away because one of the dozens of committers\ncan’t make time to try and make it work doesn’t sit well with me. If a\ncommitter wants to actively reject the patch with an explanation then so be\nit.\n\nDavid J.\n\nOn Tuesday, March 9, 2021, David Steele <david@pgmasters.net> wrote:\nFurther, I think we should close this entry at the end of the CF if it does not attract committer interest. Tom is not in favor of the patch and it appears Alexander decided not to commit it.\nPavel re-reviewed it and was fine with ready-to-commit so that status seems fine.Frankly, I am hoping for a bit more constructive feedback and even collaboration from a committer, specifically Tom, on this one given the outstanding user complaints received on the topic, our disagreement regarding fixing it (which motivates the patch to better document and add tests), and professional courtesy given to a fellow consistent community contributor.So, no, making it just go away because one of the dozens of committers can’t make time to try and make it work doesn’t sit well with me. If a committer wants to actively reject the patch with an explanation then so be it.David J.",
"msg_date": "Tue, 9 Mar 2021 08:08:58 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DROP relation IF EXISTS Docs and Tests - Bug Fix"
},
{
"msg_contents": "On 3/9/21 10:08 AM, David G. Johnston wrote:\n> \n> On Tuesday, March 9, 2021, David Steele <david@pgmasters.net \n> <mailto:david@pgmasters.net>> wrote:\n> \n> Further, I think we should close this entry at the end of the CF if\n> it does not attract committer interest. Tom is not in favor of the\n> patch and it appears Alexander decided not to commit it.\n> \n> Pavel re-reviewed it and was fine with ready-to-commit so that status \n> seems fine.\n\nAh yes, that was my mistake.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Tue, 9 Mar 2021 11:00:59 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: DROP relation IF EXISTS Docs and Tests - Bug Fix"
},
{
"msg_contents": "On Tue, Mar 9, 2021 at 9:01 PM David Steele <david@pgmasters.net> wrote:\n\n> On 3/9/21 10:08 AM, David G. Johnston wrote:\n> >\n> > On Tuesday, March 9, 2021, David Steele <david@pgmasters.net\n> > <mailto:david@pgmasters.net>> wrote:\n> >\n> > Further, I think we should close this entry at the end of the CF if\n> > it does not attract committer interest. Tom is not in favor of the\n> > patch and it appears Alexander decided not to commit it.\n> >\n> > Pavel re-reviewed it and was fine with ready-to-commit so that status\n> > seems fine.\n>\n> Ah yes, that was my mistake.\n>\n> Regards,\n> --\n> -David\n> david@pgmasters.net\n>\n>\n>\nThe status of the patch is \"Need Review\" which was previously \"Ready for\nCommitter ''. After @David G\nand @David Steele <david@pgmasters.net> comments, it's not clear whether it\nshould be \"Read for commit\" or \"Need Review\".\n\n-- \nIbrar Ahmed\n\nOn Tue, Mar 9, 2021 at 9:01 PM David Steele <david@pgmasters.net> wrote:On 3/9/21 10:08 AM, David G. Johnston wrote:\n> \n> On Tuesday, March 9, 2021, David Steele <david@pgmasters.net \n> <mailto:david@pgmasters.net>> wrote:\n> \n> Further, I think we should close this entry at the end of the CF if\n> it does not attract committer interest. Tom is not in favor of the\n> patch and it appears Alexander decided not to commit it.\n> \n> Pavel re-reviewed it and was fine with ready-to-commit so that status \n> seems fine.\n\nAh yes, that was my mistake.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\nThe status of the patch is \"Need Review\" which was previously \"Ready for Committer ''. After @David Gand @David Steele comments, it's not clear whether it should be \"Read for commit\" or \"Need Review\". -- Ibrar Ahmed",
"msg_date": "Tue, 13 Jul 2021 15:30:22 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP relation IF EXISTS Docs and Tests - Bug Fix"
},
{
"msg_contents": "On Tue, Jul 13, 2021 at 3:30 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n\n>\n>\n> On Tue, Mar 9, 2021 at 9:01 PM David Steele <david@pgmasters.net> wrote:\n>\n>> On 3/9/21 10:08 AM, David G. Johnston wrote:\n>> >\n>> > On Tuesday, March 9, 2021, David Steele <david@pgmasters.net\n>> > <mailto:david@pgmasters.net>> wrote:\n>> >\n>> > Further, I think we should close this entry at the end of the CF if\n>> > it does not attract committer interest. Tom is not in favor of the\n>> > patch and it appears Alexander decided not to commit it.\n>> >\n>> > Pavel re-reviewed it and was fine with ready-to-commit so that status\n>> > seems fine.\n>>\n>> Ah yes, that was my mistake.\n>>\n>> Regards,\n>> --\n>> -David\n>> david@pgmasters.net\n>>\n>>\n>>\n> The status of the patch is \"Need Review\" which was previously \"Ready for\n> Committer ''. After @David G\n> and @David Steele <david@pgmasters.net> comments, it's not clear whether\n> it should be \"Read for commit\" or \"Need Review\".\n>\n>\nI changed it to Ready to Commit based on the same logic as my reply to\nDavid quoted above.\n\nDavid J.\n\nOn Tue, Jul 13, 2021 at 3:30 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:On Tue, Mar 9, 2021 at 9:01 PM David Steele <david@pgmasters.net> wrote:On 3/9/21 10:08 AM, David G. Johnston wrote:\n> \n> On Tuesday, March 9, 2021, David Steele <david@pgmasters.net \n> <mailto:david@pgmasters.net>> wrote:\n> \n> Further, I think we should close this entry at the end of the CF if\n> it does not attract committer interest. Tom is not in favor of the\n> patch and it appears Alexander decided not to commit it.\n> \n> Pavel re-reviewed it and was fine with ready-to-commit so that status \n> seems fine.\n\nAh yes, that was my mistake.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\nThe status of the patch is \"Need Review\" which was previously \"Ready for Committer ''. 
After @David Gand @David Steele comments, it's not clear whether it should be \"Read for commit\" or \"Need Review\". I changed it to Ready to Commit based on the same logic as my reply to David quoted above.David J.",
"msg_date": "Tue, 13 Jul 2021 08:37:49 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DROP relation IF EXISTS Docs and Tests - Bug Fix"
},
{
"msg_contents": "On Tue, Mar 9, 2021 at 10:09 AM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> Frankly, I am hoping for a bit more constructive feedback and even collaboration from a committer, specifically Tom, on this one given the outstanding user complaints received on the topic, our disagreement regarding fixing it (which motivates the patch to better document and add tests), and professional courtesy given to a fellow consistent community contributor.\n>\n> So, no, making it just go away because one of the dozens of committers can’t make time to try and make it work doesn’t sit well with me. If a committer wants to actively reject the patch with an explanation then so be it.\n\nI have reviewed this patch and my opinion is that we should reject it,\nbecause in my opinion, the documentation changes are not improvements,\nand there is no really clear need for the regression test additions.\nAccording to my view of the situation, there are three kinds of\nchanges in this patch. The first set of hunks make minor wording\nadjustments. Typical is this:\n\n <command>CREATE DOMAIN</command> creates a new domain. A domain is\n- essentially a data type with optional constraints (restrictions on\n+ a data type with optional constraints (restrictions on\n the allowed set of values).\n\nSo, the only change here is deleting the word \"essentially.\" Now, it's\npossible that if someone different had written the original text, they\nmight have chosen to leave that word out. Personally, I would have\nchosen to include it, but it's a judgement call. However, I find it\nextremely difficult to imagine that anybody will be confused because\nof the presence of that word there.\n\n- The domain name must be unique among the types and domains existing\n- in its schema.\n+ The domain name must be unique among all types (not just domains)\n+ existing in its schema.\n\nSimilarly here. 
It is arguable which way the text reads better, but\nthe stated purpose of the patch is to make the behavior more clear,\nand I cannot imagine someone reading the existing text and getting\nconfused, and then reading the patched text and being not confused.\n\n- Do not throw an error if the domain does not exist. A notice is issued\n- in this case.\n+ This parameter instructs <productname>PostgreSQL</productname> to search\n+ for the first instance of any type with the provided name.\n+ If no type is found a notice is issued and the command ends.\n+ If a type is found then one of two things will happen:\n+ if the type is a domain it is dropped, otherwise the command fails.\n\nThis is the second kind of hunk that is present in this patch. There\nare a whole bunch that are very similar, adjusting the documentation\nfor various object types. I appreciate that it does confuse users\nsometimes that a DROP IF NOT EXISTS command can still fail for some\nother reason, so maybe we should try to clarify that in some way, but\nI find this explanation to be too complex and technical to be helpful.\nIf we feel it's necessary to be more clear here, I'd suggest keeping\nthe existing text and adding a sentence like: \"Note that the command\ncan still fail for other reasons; for example, it is an error if a\ntype with the provided name exists but is not a domain.\"\n\nI feel that it's unnecessary to try to clarify what the behavior of\nthe command is relative to search_path, but if it were agreed that we\nneed to do so, I still would not like this way of doing it. First, you\nnever actually use the term \"search_path\". I expect a lot of people\nwould be confused by the reference to searching \"for the first\ninstance\" because they won't know what is being searched. Second, this\nentire bit of text is inside the description of \"IF NOT EXISTS\" but\nthe behavior in question is mostly stuff that applies whether or not\n\"IF NOT EXISTS\" is specified. 
Moreover, it's not only not specific to\nIF NOT EXISTS, it's not even specific to this particular command. The\nbehavior of looking through the search_path for the first instance of\nsome object type is one that we use practically everywhere in all\nkinds of situations. I think we are going to go crazy if we try to\nre-explain that in every place where it might be relevant.\n\nThe final portion of the patch adds new regression tests. I'm not\ngoing to say that this is completely without merit, because it can be\nuseful to know if some patch changes the behavior, but I guess I don't\nreally see a whole lot of value in it, either. It's easy to end up\nwith a ton of tests that take up a little bit of time every time\nsomeone runs 'make check' but only ever find behavior changes that the\ndeveloper was intentionally trying to make. I think it's possible that\nthese changes would end up falling into that category. I wouldn't\ncomplaining about them getting committed, or about committing them\nmyself, if there were a chorus of people convinced that they were\nworth having, but there isn't, and I don't find them valuable enough\nmyself to justify a commit.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 10 Aug 2021 15:04:04 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP relation IF EXISTS Docs and Tests - Bug Fix"
},
{
"msg_contents": "On Tue, Aug 10, 2021 at 12:04 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Mar 9, 2021 at 10:09 AM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> > Frankly, I am hoping for a bit more constructive feedback and even\n> collaboration from a committer, specifically Tom, on this one given the\n> outstanding user complaints received on the topic, our disagreement\n> regarding fixing it (which motivates the patch to better document and add\n> tests), and professional courtesy given to a fellow consistent community\n> contributor.\n> >\n> > So, no, making it just go away because one of the dozens of committers\n> can’t make time to try and make it work doesn’t sit well with me. If a\n> committer wants to actively reject the patch with an explanation then so be\n> it.\n>\n> I have reviewed this patch and my opinion is that we should reject it,\n>\n\nThank you for the feedback.\n\n> So, the only change here is deleting the word \"essentially.\"\n\n\nI do tend to find this wishy-washy language to be more annoying than the\ncommunity at large.\n\n\n> I'd suggest keeping\n> the existing text and adding a sentence like: \"Note that the command\n> can still fail for other reasons; for example, it is an error if a\n> type with the provided name exists but is not a domain.\"\n>\n\nI would at least like to see this added in response to the various bug\nreports that found the shared namespace among types, and the fact that it\ncauses an error, to be a surprise.\n\n> The final portion of the patch adds new regression tests. I'm not\n> going to say that this is completely without merit, because it can be\n> useful to know if some patch changes the behavior, but I guess I don't\n> really see a whole lot of value in it, either.\n>\n\nI'd say the Bug # 16492 tests warrant keeping independent of the opinion\nthat demonstrating the complicated interplay between 10+ SQL commands isn't\nworth the test suite time. 
I'd say that probably half of the tests are\ndemonstrating non-intuitive behavior from my perspective. The bug test\nnoted above plus the one the demonstration that a table in the non-first\nschema in a search_path will not prevent a create <type> command from\nsucceeding but will cause a DROP <type non-table> IF EXISTS to error out.\nDoes it need to test all 5 types, probably not, but it should at least test\nDROP VIEW IF EXISTS test_rel_exists.\n\nWhat about the inherent confusion that having both DROP DOMAIN when DROP\nTYPE will also drop domains? The doc change for that doesn't really fit\ninto your buckets. It would include:\n\ndrop_domain.sgml\n+ This duplicates the functionality provided by\n+ <xref linkend=\"sql-droptype\"/> but restricts\n+ the type's type to domain.\n\ndrop_type.sgml\n+ This includes domains, though they can be removed specifically\n+ by using the <xref linkend=\"sql-dropdomain\"/> command.\n\nAdding sql-droptype to \"See Also\" on all the domain related command pages\nas well.\n\nAfter looking at this again I will say I do agree that the procedural\nnature of the doc changes for the main issue were probably overkill and a\n\"oh-by-the-way\" note as to the fact that we ERROR on a namespace conflict\nwould address that concern in a user-facing way adequately. Looking back\nand what I went through to put the test script together I don't regret\ndoing the work and feel that someone like myself would benefit from its\nexistence as a whole. It's more useful than a README that would\ncommunicate the same information.\n\nDavid J.",
"msg_date": "Tue, 10 Aug 2021 14:53:30 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DROP relation IF EXISTS Docs and Tests - Bug Fix"
},
{
"msg_contents": "On Tue, Aug 10, 2021 at 5:53 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>> The final portion of the patch adds new regression tests. I'm not\n>> going to say that this is completely without merit, because it can be\n>> useful to know if some patch changes the behavior, but I guess I don't\n>> really see a whole lot of value in it, either.\n> I'd say the Bug # 16492 tests warrant keeping independent of the opinion that demonstrating the complicated interplay between 10+ SQL commands isn't worth the test suite time. I'd say that probably half of the tests are demonstrating non-intuitive behavior from my perspective. The bug test noted above plus the one the demonstration that a table in the non-first schema in a search_path will not prevent a create <type> command from succeeding but will cause a DROP <type non-table> IF EXISTS to error out. Does it need to test all 5 types, probably not, but it should at least test DROP VIEW IF EXISTS test_rel_exists.\n\nIt's not the function of the regression tests to demonstrate behavior.\nNobody says \"I wonder how the system works, I guess I'll go look at\nthe regression tests,\" or at least very few people. The job of the\nregression test is to let us know when things get broken by future\nchanges to the source code, or in other words, to find regressions.\n\n> What about the inherent confusion that having both DROP DOMAIN when DROP TYPE will also drop domains? The doc change for that doesn't really fit into your buckets. 
It would include:\n>\n> drop_domain.sgml\n> + This duplicates the functionality provided by\n> + <xref linkend=\"sql-droptype\"/> but restricts\n> + the type's type to domain.\n>\n> drop_type.sgml\n> + This includes domains, though they can be removed specifically\n> + by using the <xref linkend=\"sql-dropdomain\"/> command.\n\nMaybe we could add a sentence to the end of the DROP DOMAIN synopsis,\nlike: Since a domain is a kind of type, they can alternatively be\ndropped using DROP TYPE.\n\nAnd for DROP TYPE, we could say something like: Since a domain is a\nkind of type, it is possible to drop a domain using this command\nrather than DROP DOMAIN.\n\nI think that's similar to your proposal, but I prefer it because I\nthink the duplicate-but-restrict language makes it sound kind of dumb\nthat DROP DOMAIN exists at all, and I don't think it's dumb that we\nhave DROP commands matching the CREATE commands that we also have.\n\n> Adding sql-droptype to \"See Also\" on all the domain related command pages as well.\n\nI probably wouldn't do this, but the notes above could include cross-links.\n\n> After looking at this again I will say I do agree that the procedural nature of the doc changes for the main issue were probably overkill and a \"oh-by-the-way\" note as to the fact that we ERROR on a namespace conflict would address that concern in a user-facing way adequately. Looking back and what I went through to put the test script together I don't regret doing the work and feel that someone like myself would benefit from its existence as a whole. It's more useful than a README that would communicate the same information.\n\nYeah. I tend to feel like this is the kind of thing where it's not\nlikely to be 100% obvious to users how it all works no matter what we\nput in the documentation, and that some of the behavior of a system\nhas to be learned just by trying out the system. 
Now, that doesn't\nmean that we should be immune to ideas about how to improve it, just\nthat someone can always have a question that isn't answered there.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 11 Aug 2021 08:54:18 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP relation IF EXISTS Docs and Tests - Bug Fix"
},
{
"msg_contents": "On Wed, Aug 11, 2021 at 5:54 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> Yeah. I tend to feel like this is the kind of thing where it's not\n> likely to be 100% obvious to users how it all works no matter what we\n> put in the documentation, and that some of the behavior of a system\n> has to be learned just by trying out the system.\n>\n\nI see where you are coming from and agree with a good portion of it, and\naccept that you are accurately representing reality for most of the rest,\nregardless of my feelings on that reality. I'm not invested in this enough\nto try and change mindsets. I have withdrawn the patch from the commitfest\nand do not plan on putting forth a replacement.\n\nThank you for your thorough review, I do appreciate it. It did reaffirm\nsome of the suspicions I had about the wording I had chosen at least.\n\nI will add that when I finished the patch I felt it was of more value to\nfuture developers than it would be to our end users. I never really\nworried that a patch could be written to fill in the missing pieces that\nprompted the various bug reports. I went to the extra effort because the\nunderlying interactions seemed complicated and I wanted to see in practice\nexactly how they behaved and what that meant for usability. I also still\ndisagree with our choice to emit an error on a namespace collision, and\ngenerally feel this area could use improvement. Thus I keep the tests\naround, which basically communicate that \"this is how things work and it is\nintentional\" and also are useful to have should future changes, however\nunlikely to materialize, be made. Sure, that isn't \"regression\" but in my\nunfamiliarity I didn't know of a better existing place to put them and\ndidn't think to figure out (or create) a better location, one that doesn't\nrun on every build.\n\nDavid J.",
"msg_date": "Wed, 11 Aug 2021 07:41:04 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DROP relation IF EXISTS Docs and Tests - Bug Fix"
}
] |
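The behavior this thread circles around — `DROP <kind> IF EXISTS` silently skipping an absent name but raising an ERROR when the name exists as a *different* kind of object (the surprise behind Bug #16492) — can be modeled with a toy catalog. The following Python sketch is purely illustrative: the names and structure are invented and bear no relation to PostgreSQL's actual implementation.

```python
# Toy model of the DROP <kind> IF EXISTS semantics discussed above.
# Illustrative only; not PostgreSQL code.

class DropError(Exception):
    pass

def drop_if_exists(catalog, name, kind):
    """Mimic the observed behavior: no-op when the name is absent,
    drop on a kind match, but raise an ERROR when the name exists
    as a *different* kind (the namespace-conflict surprise)."""
    if name not in catalog:
        return "skipping"            # NOTICE: ... does not exist, skipping
    if catalog[name] != kind:
        raise DropError(f'"{name}" is not a {kind}')
    del catalog[name]
    return "dropped"

catalog = {"test_rel_exists": "table"}
print(drop_if_exists(catalog, "missing_view", "view"))    # prints: skipping
try:
    drop_if_exists(catalog, "test_rel_exists", "view")    # name taken by a table
except DropError as e:
    print(e)                         # prints: "test_rel_exists" is not a view
```

Note how the second call errors rather than skipping, even though no view by that name exists — which is exactly the point of contention above.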
[
{
"msg_contents": "nbtdedup.c's \"single value strategy\" is used when we have a leaf page\nthat is full of duplicates of the same value. We infer that we will\nhave many more in the future, and cooperate with the \"single value\nstrategy\" used within nbtsplitloc.c. Deduplication should create 6\nlarge posting list tuples that will remain on the page after an\nanticipated page split, plus some remaining non-dedup'd regular tuples\nthat go on the new right page. So nbtdedup.c anticipates how an\neventual page split will need to work to keep space utilization high,\nbut not too high (though only in this specific \"single value strategy\"\ncase).\n\n(Note that the choice of 6 posting list tuples is arbitrary -- it\nseemed like limiting the size of posting list tuples to 1/3 of a page\n(the long established maximum) was a bit too aggressive, so the\nposting list tuple size limit was halved to 1/6 of the page.)\n\nSometimes, an append-only inserter of low cardinality data (just a few\ndistinct values) can leave some \"single value\" pages with perhaps 8 or\n9 tuples -- not the intended 7 (i.e. not the intended 6 tuples plus 1\nhigh key). This isn't really a problem. It can only happen when a\nplain tuple is \"wedged\" between two existing max-sized posting list\ntuples, where we cannot merge the plain tuple with either\nadjoining/neighboring posting list tuple (since that violates the 1/6\nof a page limit on posting list tuple size). This scenario is rare,\nand in any case the space utilization is almost exactly what it's\nsupposed to be in the end (the BTREE_SINGLEVAL_FILLFACTOR \"fill\nfactor\" that was added to Postgres 12 is still respected in the\npresence of deduplication in Postgres 13). This much is okay.\n\nHowever, I noticed that it's also possible to confuse \"single value\nstrategy\" in nbtdedup.c into thinking that it has already observed 6\n\"max sized\" tuples, when in fact it has not. 
This can lead to the\noccasional page that contains perhaps 50 or 70 tuples. This is rare\nenough that you have to really be paying attention to the structure of\nthe index to notice it -- most of my test case indexes that use\n\"single value strategy\" a lot aren't even affected. I'm not okay with\nthis, because it's not how nbtdedup.c is intended to work -- clearly\nnbtdedup.c gets confused as a consequence of the \"plain tuple gets\nwedged\" issue.\n\nAttached patch nails down the behavior of single value strategy. This\nslightly improves the space utilization in a small minority of my test\ncase indexes, though not by enough to matter. For example, the TPC-H\nidx_customer_nationkey index goes from 1193 pages to 1182 pages. This\nis an utterly insignificant issue, but there is no reason to allow it.\nWhen I think about space utilization, I don't just look at the index\noverall -- I also expect a certain amount of consistency within\nrelated subsets of the index's key space.\n\n-- \nPeter Geoghegan",
"msg_date": "Wed, 17 Jun 2020 15:48:03 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Minor issue with deduplication's single value strategy"
}
] |
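The single value strategy described above — deduplicating a leaf page full of duplicates into at most six posting-list tuples, each capped at 1/6 of a page — can be sketched as follows. This is a deliberate simplification (sizes counted in abstract TID units rather than bytes, and the function name is invented); the real logic lives in nbtdedup.c and cooperates with nbtsplitloc.c.

```python
# Simplified sketch of nbtree "single value strategy" packing: assume a
# page holds `page_capacity` heap TIDs, cap each posting list at 1/6 of
# that, and form at most MAX_POSTING_TUPLES (6) posting lists.  Purely
# illustrative; not the actual nbtdedup.c algorithm.
MAX_POSTING_TUPLES = 6

def single_value_pack(ntids, page_capacity):
    cap = page_capacity // MAX_POSTING_TUPLES   # the 1/6-of-page limit
    posting_lists = []
    remaining = ntids
    while remaining >= 2 and len(posting_lists) < MAX_POSTING_TUPLES:
        take = min(cap, remaining)
        posting_lists.append(take)
        remaining -= take
    # `remaining` TIDs stay as plain (non-deduplicated) tuples; after the
    # anticipated page split they end up on the new right page.
    return posting_lists, remaining

lists, leftover = single_value_pack(ntids=650, page_capacity=600)
print(lists, leftover)   # [100, 100, 100, 100, 100, 100] 50
```

In this toy model the "wedged plain tuple" problem corresponds to items that cannot be merged into an adjoining posting list without exceeding `cap` — the scenario the patch above tightens up.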
[
{
"msg_contents": "Hi,\n\nNow that HashAgg can spill to disk, we see a few more details in\nEXPLAIN ANALYZE than we did previously, e.g. Peak Memory Usage, Disk\nUsage. However, the new code neglected to make EXPLAIN ANALYZE show\nthese new details for parallel workers.\n\nAdditionally, the new properties all were using\nExplainPropertyInteger() which meant that we got output like:\n\n QUERY PLAN\n---------------------------------------------------------------------\n HashAggregate (actual time=31.724..87.638 rows=1000 loops=1)\n Group Key: a\n Peak Memory Usage: 97 kB\n Disk Usage: 3928 kB\n HashAgg Batches: 798\n -> Seq Scan on ab (actual time=0.006..9.243 rows=100000 loops=1)\n\nWhere all the properties were on a line by themselves. This does not\nreally follow what other nodes are doing, e.g sort:\n\n QUERY PLAN\n---------------------------------------------------------------------------\n GroupAggregate (actual time=47.530..70.935 rows=1000 loops=1)\n Group Key: a\n -> Sort (actual time=47.500..59.344 rows=100000 loops=1)\n Sort Key: a\n Sort Method: external merge Disk: 2432kB\n -> Seq Scan on ab (actual time=0.004..8.476 rows=100000 loops=1)\n\nWhere we stick all the properties on a single line.\n\nThe attached patch fixes both the missing parallel worker information\nand puts the properties on a single line for format=text.\n\nI'd like to push this in the next few days.\n\nDavid",
"msg_date": "Thu, 18 Jun 2020 15:37:21 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Missing HashAgg EXPLAIN ANALYZE details for parallel plans"
},
{
"msg_contents": "On Thu, Jun 18, 2020 at 03:37:21PM +1200, David Rowley wrote:\n> Now that HashAgg can spill to disk, we see a few more details in\n> EXPLAIN ANALYZE than we did previously, e.g. Peak Memory Usage, Disk\n> Usage. However, the new code neglected to make EXPLAIN ANALYZE show\n> these new details for parallel workers.\n> \n> Additionally, the new properties all were using\n> ExplainPropertyInteger() which meant that we got output like:\n> \n> Group Key: a\n> Peak Memory Usage: 97 kB\n> Disk Usage: 3928 kB\n> HashAgg Batches: 798\n> \n> Where all the properties were on a line by themselves. This does not\n> really follow what other nodes are doing, e.g sort:\n> \n> -> Sort (actual time=47.500..59.344 rows=100000 loops=1)\n> Sort Key: a\n> Sort Method: external merge Disk: 2432kB\n> \n> Where we stick all the properties on a single line.\n\nNote that \"incremental sort\" is also new, and splits things up more than sort.\n\nSee in particular 6a918c3ac8a6b1d8b53cead6fcb7cbd84eee5750, which splits things\nup even more.\n\n -> Incremental Sort (actual rows=70 loops=1)\n Sort Key: t.a, t.b\n Presorted Key: t.a\n- Full-sort Groups: 1 Sort Method: quicksort Memory: avg=NNkB peak=NNkB Presorted Groups: 5 Sort Methods: top-N heapsort, quicksort Memory: avg=NNkB peak=NNkB\n+ Full-sort Groups: 1 Sort Method: quicksort Average Memory: NNkB Peak Memory: NNkB\n+ Pre-sorted Groups: 5 Sort Methods: top-N heapsort, quicksort Average Memory: NNkB Peak Memory: NNkB\n\nThat's not really a \"precedent\" and I don't think that necessarily invalidates\nyour change.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 18 Jun 2020 08:45:14 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing HashAgg EXPLAIN ANALYZE details for parallel plans"
},
{
"msg_contents": "On Fri, 19 Jun 2020 at 01:45, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Note that \"incremental sort\" is also new, and splits things up more than sort.\n>\n> See in particular 6a918c3ac8a6b1d8b53cead6fcb7cbd84eee5750, which splits things\n> up even more.\n>\n> -> Incremental Sort (actual rows=70 loops=1)\n> Sort Key: t.a, t.b\n> Presorted Key: t.a\n> - Full-sort Groups: 1 Sort Method: quicksort Memory: avg=NNkB peak=NNkB Presorted Groups: 5 Sort Methods: top-N heapsort, quicksort Memory: avg=NNkB peak=NNkB\n> + Full-sort Groups: 1 Sort Method: quicksort Average Memory: NNkB Peak Memory: NNkB\n> + Pre-sorted Groups: 5 Sort Methods: top-N heapsort, quicksort Average Memory: NNkB Peak Memory: NNkB\n>\n> That's not really a \"precedent\" and I don't think that necessarily invalidates\n> your change.\n\nI imagine you moved \"Per-sorted Groups\" to a new line due to the lines\nbecoming too long? Or was there something else special about that\nproperty to warrant putting it on a new line?\n\nIf it's due to the length of the line, then I don't think there are\nquite enough properties for HashAgg to warrant wrapping them to\nanother line.\n\nPerhaps there's some merit having something else decide when we should\nwrap to a new line. e.g once we've put 4 properties on a single line\nwith the text format. However, it seems like we're pretty inconsistent\nwith the normal form of properties. Some have multiple values per\nproperty, e.g:\n\nif (es->format == EXPLAIN_FORMAT_TEXT)\n{\nExplainIndentText(es);\nappendStringInfo(es->str, \"Sort Method: %s %s: %ldkB\\n\",\nsortMethod, spaceType, spaceUsed);\n}\nelse\n{\nExplainPropertyText(\"Sort Method\", sortMethod, es);\nExplainPropertyInteger(\"Sort Space Used\", \"kB\", spaceUsed, es);\nExplainPropertyText(\"Sort Space Type\", spaceType, es);\n}\n\nSo spaceType is a \"Sort Method\" in the text format, but it's \"Sort\nSpace Type\" in other formats. 
It might not be easy to remove all the\nspecial casing for the text format out of explain.c without changing\nthe output.\n\n\nAs for this patch, I don't think it's unreasonable to have the 3\npossible HashAgg properties on a single line. Other people might\ndisagree, so here's an example of what the patch changes it to:\n\npostgres=# explain analyze select a,sum(b) from ab group by a;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=175425.12..194985.79 rows=988 width=12) (actual\ntime=1551.920..5281.670 rows=1000 loops=1)\n Group Key: a\n Peak Memory Usage: 97 kB Disk Usage: 139760 kB HashAgg Batches: 832\n -> Seq Scan on ab (cost=0.00..72197.00 rows=5005000 width=8)\n(actual time=0.237..451.228 rows=5005000 loops=1)\n\nMaster currently does:\n\n QUERY PLAN\n---------------------------------------------------------------------\n HashAggregate (actual time=31.724..87.638 rows=1000 loops=1)\n Group Key: a\n Peak Memory Usage: 97 kB\n Disk Usage: 3928 kB\n HashAgg Batches: 798\n -> Seq Scan on ab (actual time=0.006..9.243 rows=100000 loops=1)\n\nDavid\n\n\n",
"msg_date": "Fri, 19 Jun 2020 14:02:29 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Missing HashAgg EXPLAIN ANALYZE details for parallel plans"
},
{
"msg_contents": "On Fri, Jun 19, 2020 at 02:02:29PM +1200, David Rowley wrote:\n> On Fri, 19 Jun 2020 at 01:45, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Note that \"incremental sort\" is also new, and splits things up more than sort.\n> >\n> > See in particular 6a918c3ac8a6b1d8b53cead6fcb7cbd84eee5750, which splits things\n> > up even more.\n> >\n> > -> Incremental Sort (actual rows=70 loops=1)\n> > Sort Key: t.a, t.b\n> > Presorted Key: t.a\n> > - Full-sort Groups: 1 Sort Method: quicksort Memory: avg=NNkB peak=NNkB Presorted Groups: 5 Sort Methods: top-N heapsort, quicksort Memory: avg=NNkB peak=NNkB\n> > + Full-sort Groups: 1 Sort Method: quicksort Average Memory: NNkB Peak Memory: NNkB\n> > + Pre-sorted Groups: 5 Sort Methods: top-N heapsort, quicksort Average Memory: NNkB Peak Memory: NNkB\n> >\n> > That's not really a \"precedent\" and I don't think that necessarily invalidates\n> > your change.\n> \n> I imagine you moved \"Per-sorted Groups\" to a new line due to the lines\n> becoming too long? 
Or was there something else special about that\n> property to warrant putting it on a new line?\n\nhttps://www.postgresql.org/message-id/20200419023625.GP26953@telsasoft.com\n|I still think Pre-sorted groups should be on a separate line, as in 0002.\n|In addition to looking better (to me), and being easier to read, another reason\n|is that there are essentially key=>values here, but the keys are repeated (Sort\n|Method, etc).\n\n> postgres=# explain analyze select a,sum(b) from ab group by a;\n> HashAggregate (cost=175425.12..194985.79 rows=988 width=12) (actual time=1551.920..5281.670 rows=1000 loops=1)\n> Group Key: a\n> Peak Memory Usage: 97 kB Disk Usage: 139760 kB HashAgg Batches: 832\n\nPlease be sure to use two spaces between each field !\n\nSee earlier discussions (and commits referenced by the Opened Items page).\nhttps://www.postgresql.org/message-id/20200402054120.GC14618@telsasoft.com\nhttps://www.postgresql.org/message-id/20200407042521.GH2228%40telsasoft.com\n\n+ appendStringInfoChar(es->str, ' ');\n+\n+ appendStringInfo(es->str, \"Peak Memory Usage: \" INT64_FORMAT \" kB\", memPeakKb);\n+\n+ if (aggstate->hash_batches_used > 0)\n+ appendStringInfo(es->str, \" Disk Usage: \" UINT64_FORMAT \" kB HashAgg Batches: %d\",\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 18 Jun 2020 21:20:01 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing HashAgg EXPLAIN ANALYZE details for parallel plans"
},
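The convention Justin points to — multiple properties sharing one line in the text format, separated by two spaces, while the structured formats keep one key/value pair per property — can be sketched like this. The helper names are hypothetical (this is not explain.c); it only illustrates the text-vs-structured split being discussed.

```python
# Sketch of the EXPLAIN text-format convention discussed above: two
# spaces between fields on a shared line, versus discrete key/value
# properties for the structured formats.  Hypothetical helpers.
def hashagg_props_text(mem_peak_kb, disk_kb=None, batches=None):
    parts = [f"Peak Memory Usage: {mem_peak_kb} kB"]
    if batches and batches > 0:
        parts.append(f"Disk Usage: {disk_kb} kB")
        parts.append(f"HashAgg Batches: {batches}")
    return "  ".join(parts)          # two spaces between fields

def hashagg_props_structured(mem_peak_kb, disk_kb=None, batches=None):
    props = {"Peak Memory Usage": mem_peak_kb}
    if batches and batches > 0:
        props["Disk Usage"] = disk_kb
        props["HashAgg Batches"] = batches
    return props

print(hashagg_props_text(97, 3928, 798))
# Peak Memory Usage: 97 kB  Disk Usage: 3928 kB  HashAgg Batches: 798
```

The disk properties only appear when spilling actually happened (batches > 0), mirroring the conditional `appendStringInfo` in the patch.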
{
"msg_contents": "On Fri, 19 Jun 2020 at 14:20, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Please be sure to use two spaces between each field !\n>\n> See earlier discussions (and commits referenced by the Opened Items page).\n> https://www.postgresql.org/message-id/20200402054120.GC14618@telsasoft.com\n> https://www.postgresql.org/message-id/20200407042521.GH2228%40telsasoft.com\n\nThanks. I wasn't aware of that conversion.\n\nv2 attached.\n\nDavid",
"msg_date": "Fri, 19 Jun 2020 15:03:41 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Missing HashAgg EXPLAIN ANALYZE details for parallel plans"
},
{
"msg_contents": "On Thu, 2020-06-18 at 15:37 +1200, David Rowley wrote:\n> The attached patch fixes both the missing parallel worker information\n> and puts the properties on a single line for format=text.\n\nThank you. \n\nI noticed some strange results in one case where one worker had a lot\nmore batches than another. After I investigated, I think everything is\nfine -- it seems that one worker was getting more tuples from the\nunderlying parallel scan. So everything looks good to me.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Thu, 18 Jun 2020 20:28:30 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing HashAgg EXPLAIN ANALYZE details for parallel plans"
},
{
"msg_contents": "On Fri, Jun 19, 2020 at 03:03:41PM +1200, David Rowley wrote:\n> On Fri, 19 Jun 2020 at 14:20, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Please be sure to use two spaces between each field !\n> >\n> > See earlier discussions (and commits referenced by the Opened Items page).\n> > https://www.postgresql.org/message-id/20200402054120.GC14618@telsasoft.com\n> > https://www.postgresql.org/message-id/20200407042521.GH2228%40telsasoft.com\n> \n> Thanks. I wasn't aware of that conversion.\n\nTo be clear, there wasn't a \"conversion\". There were (and are still) different\nformats (which everyone agrees isn't ideal) used by \"explain(BUFFERS)\" vs Sort\nand Hash. The new incr sort changed from output that looked like Buffers (one\nspace, and equals) to output that looked like Sort/Hashjoin (two spaces and\ncolons). And the new explain(WAL) originally used a hybrid (which, on my\nrequest, used two spaces), but it was changed (back) to use one space, for\nconsistency with explain(BUFFERS).\n\nSome minor nitpicks now that we've dealt with the important issue of\nwhitespace:\n\n+ bool gotone = false;\n\n=> Would you consider calling that \"found\" ?\n\nI couldn't put my finger on it at first why this felt so important to ask, but\nrealized that my head was tripping over a variable whose name starts with\n\"goto\", and spending 0.2 seconds trying to figure out what you might have meant\nby \"goto ne\".\n\n+ int n;\n+\n+ for (n = 0; n < aggstate->shared_info->num_workers; n++)\n\n=> Maybe use a C99 declaration ?\n\n+ if (hash_batches_used > 0)\n+ {\n+ ExplainPropertyInteger(\"Disk Usage\", \"kB\", hash_disk_used,\n+ es);\n+ ExplainPropertyInteger(\"HashAgg Batches\", NULL,\n+ hash_batches_used, es);\n+ }\n\n=> Shouldn't those *always* be shown in nontext format ? 
I realize that's not\na change new to your patch, and maybe should be a separate commit.\n\n+ size = offsetof(SharedAggInfo, sinstrument)\n+ + pcxt->nworkers * sizeof(AggregateInstrumentation);\n\n=> There's a couple places where I'd prefer to see \"+\" at the end of the\npreceding line (but YMMV).\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 18 Jun 2020 23:06:24 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing HashAgg EXPLAIN ANALYZE details for parallel plans"
},
{
"msg_contents": "On Fri, 19 Jun 2020 at 16:06, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Fri, Jun 19, 2020 at 03:03:41PM +1200, David Rowley wrote:\n> > On Fri, 19 Jun 2020 at 14:20, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > Please be sure to use two spaces between each field !\n> > >\n> > > See earlier discussions (and commits referenced by the Opened Items page).\n> > > https://www.postgresql.org/message-id/20200402054120.GC14618@telsasoft.com\n> > > https://www.postgresql.org/message-id/20200407042521.GH2228%40telsasoft.com\n> >\n> > Thanks. I wasn't aware of that conversion.\n>\n> To be clear, there wasn't a \"conversion\".\n\nSorry, I meant to write \"conversation\".\n\n>\n> Some minor nitpicks now that we've dealt with the important issue of\n> whitespace:\n>\n> + bool gotone = false;\n>\n> => Would you consider calling that \"found\" ?\n\nI'll consider it. I didn't really invent the name. There's plenty of\nother places that use that name for the same thing. I think of\n\"found\" as more often used when we're looking for something and need\nto record if we found it or not. That's not really happening here. I\njust want to record if we've added a property yet.\n\n> + int n;\n> +\n> + for (n = 0; n < aggstate->shared_info->num_workers; n++)\n>\n> => Maybe use a C99 declaration ?\n\nMaybe. It'll certainly save a couple of lines.\n\n> + if (hash_batches_used > 0)\n> + {\n> + ExplainPropertyInteger(\"Disk Usage\", \"kB\", hash_disk_used,\n> + es);\n> + ExplainPropertyInteger(\"HashAgg Batches\", NULL,\n> + hash_batches_used, es);\n> + }\n>\n> => Shouldn't those *always* be shown in nontext format ? I realize that's not\n> a change new to your patch, and maybe should be a separate commit.\n\nPerhaps a separate commit. I don't want to overload the debate with\ntoo many things. 
I'd rather push forward with just the originally\nproposed change since nobody has shown any objection for it.\n\n> + size = offsetof(SharedAggInfo, sinstrument)\n> + + pcxt->nworkers * sizeof(AggregateInstrumentation);\n>\n> => There's a couple places where I'd prefer to see \"+\" at the end of the\n> preceding line (but YMMV).\n\nI pretty much just copied the whole of that code from nodeSort.c. I'm\nmore inclined to just keep it as similar to that as possible. However,\nif pgindent decides otherwise, then I'll go with that. I imagine it\nwon't move it though as that code has already been through indentation\na few times before.\n\nThanks for the review.\n\nDavid\n\n\n",
"msg_date": "Fri, 19 Jun 2020 16:21:18 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Missing HashAgg EXPLAIN ANALYZE details for parallel plans"
},
{
"msg_contents": "On Fri, 19 Jun 2020 at 15:28, Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Thu, 2020-06-18 at 15:37 +1200, David Rowley wrote:\n> > The attached patch fixes both the missing parallel worker information\n> > and puts the properties on a single line for format=text.\n>\n> Thank you.\n>\n> I noticed some strange results in one case where one worker had a lot\n> more batches than another. After I investigated, I think everything is\n> fine -- it seems that one worker was getting more tuples from the\n> underlying parallel scan. So everything looks good to me.\n\nThanks for having a look at this. I just pushed it.\n\nDavid\n\n\n",
"msg_date": "Fri, 19 Jun 2020 17:27:13 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Missing HashAgg EXPLAIN ANALYZE details for parallel plans"
},
{
"msg_contents": "On 2020-Jun-19, David Rowley wrote:\n\n\n> > + size = offsetof(SharedAggInfo, sinstrument)\n> > + + pcxt->nworkers * sizeof(AggregateInstrumentation);\n> >\n> > => There's a couple places where I'd prefer to see \"+\" at the end of the\n> > preceding line (but YMMV).\n> \n> I pretty much just copied the whole of that code from nodeSort.c. I'm\n> more inclined to just keep it as similar to that as possible. However,\n> if pgindent decides otherwise, then I'll go with that. I imagine it\n> won't move it though as that code has already been through indentation\n> a few times before.\n\npgindent won't change your choice here. This seems an ideological\nissue; each committer has got their style. Some people prefer to put\noperators at the end of the line, others do it this way.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 19 Jun 2020 15:14:45 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing HashAgg EXPLAIN ANALYZE details for parallel plans"
},
{
"msg_contents": "On Fri, Jun 19, 2020 at 02:02:29PM +1200, David Rowley wrote:\n> if (es->format == EXPLAIN_FORMAT_TEXT)\n> {\n> ExplainIndentText(es);\n> appendStringInfo(es->str, \"Sort Method: %s %s: %ldkB\\n\",\n...\n\n> As for this patch, I don't think it's unreasonable to have the 3\n> possible HashAgg properties on a single line Other people might\n> disagree, so here's an example of what the patch changes it to:\n> \n> postgres=# explain analyze select a,sum(b) from ab group by a;\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=175425.12..194985.79 rows=988 width=12) (actual\n> time=1551.920..5281.670 rows=1000 loops=1)\n> Group Key: a\n> Peak Memory Usage: 97 kB Disk Usage: 139760 kB HashAgg Batches: 832\n> -> Seq Scan on ab (cost=0.00..72197.00 rows=5005000 width=8) (actual time=0.237..451.228 rows=5005000 loops=1)\n\nI've just noticed another inconsistency:\nFor \"Sort\", there's no space before \"kB\", but your patch (9bdb300d) uses a\nspace for text mode.\n\n+ appendStringInfo(es->str, \"Peak Memory Usage: \" INT64_FORMAT \" kB\",\n+ memPeakKb);\n+\n+ if (aggstate->hash_batches_used > 0)\n+ appendStringInfo(es->str, \" Disk Usage: \" UINT64_FORMAT \" kB HashAgg Batches: %d\",\n\n...\n\n+ appendStringInfo(es->str, \"Peak Memory Usage: \" INT64_FORMAT \" kB\",\n+ memPeakKb);\n+\n+ if (hash_batches_used > 0)\n+ appendStringInfo(es->str, \" Disk Usage: \" UINT64_FORMAT \" kB HashAgg Batches: %d\",\n\nHopefully there's the final whitespace inconsistency for this release.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 8 Jul 2020 11:30:21 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing HashAgg EXPLAIN ANALYZE details for parallel plans"
},
{
"msg_contents": "On Thu, 9 Jul 2020 at 04:30, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I've just noticed another inconsistency:\n> For \"Sort\", there's no space before \"kB\", but your patch (9bdb300d) uses a\n> space for text mode.\n\nThanks for the report. I just pushed a fix for that.\n\nDavid\n\n\n",
"msg_date": "Thu, 9 Jul 2020 10:08:05 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Missing HashAgg EXPLAIN ANALYZE details for parallel plans"
}
]
[
{
"msg_contents": "Hi hackers,\n\nI've attached a patch to add information during standby recovery conflicts.\nThe motive behind is to be able to get some information:\n\n * On the apply side\n * On the blocker(s) side\n\n_Motivation:_\n\nWhen a standby recovery conflict occurs it could be useful to get more \ninformation to be able to dive deep on the root cause and find a way to \navoid/mitigate new occurrences.\n\nAdding this information would make the investigations easier, it could \nhelp answering questions like:\n\n * On which LSN was the WAL apply blocked?\n * What was the purpose of the bocked WAL record?\n * On which relation (if any) was the blocked WAL record related to?\n * What was the blocker(s) doing?\n * When did the blocker(s) started their queries (if any)?\n * What was the blocker(s) waiting for? on which wait event?\n\n_Technical context and proposal:_\n\nThere is 2 points in this patch:\n\n * Add the information about the blocked WAL record. This is done in\n standby.c (ResolveRecoveryConflictWithVirtualXIDs,\n ResolveRecoveryConflictWithDatabase, StandbyTimeoutHandler)\n * Add the information about the blocker(s). 
This is done in postgres.c\n (RecoveryConflictInterrupt)\n\n_Outcome Example:_\n\n2020-06-15 06:48:23.774 UTC [6971] LOG: wal record apply is blocked by 2 connection(s), reason: User query might have needed to see row versions that must be removed.\n2020-06-15 06:48:23.774 UTC [6971] LOG: blocked wal record rmgr: Heap2, lsn: 0/038E2678, received at: 2020-06-15 06:48:23.774098+00, desc: CLEAN, relation: rel 1663/13586/16652 fork main blk 0\n2020-06-15 06:48:54.773 UTC [7088] LOG: about to interrupt pid: 7088, backend_type: client backend, state: active, wait_event_type: Timeout, wait_event: PgSleep, query_start: 2020-06-15 06:48:14.87672+00\n2020-06-15 06:48:54.773 UTC [7088] DETAIL: statement: select *, pg_sleep(120) from bdt;\n2020-06-15 06:48:54.773 UTC [7088] STATEMENT: select *, pg_sleep(120) from bdt;\n2020-06-15 06:48:54.773 UTC [7088] ERROR: canceling statement due to conflict with recovery\n2020-06-15 06:48:54.773 UTC [7088] DETAIL: User query might have needed to see row versions that must be removed.\n2020-06-15 06:48:54.773 UTC [7088] STATEMENT: select *, pg_sleep(120) from bdt;\n2020-06-15 06:48:54.778 UTC [7037] LOG: about to interrupt pid: 7037, backend_type: client backend, state: active, wait_event_type: Timeout, wait_event: PgSleep, query_start: 2020-06-15 06:48:13.008427+00\n2020-06-15 06:48:54.778 UTC [7037] DETAIL: statement: select *, pg_sleep(300) from bdt;\n2020-06-15 06:48:54.778 UTC [7037] STATEMENT: select *, pg_sleep(300) from bdt;\n2020-06-15 06:48:54.778 UTC [7037] ERROR: canceling statement due to conflict with recovery\n2020-06-15 06:48:54.778 UTC [7037] DETAIL: User query might have needed to see row versions that must be removed.\n2020-06-15 06:48:54.778 UTC [7037] STATEMENT: select *, pg_sleep(300) from bdt;\n\nIt takes care of the other conflicts reason too.\n\nSo, should a standby recovery conflict occurs, we could see:\n\n * information about the blocked WAL record apply\n * information about the blocker(s)\n\nI will add this 
patch to the next commitfest. I look forward to your \nfeedback about the idea and/or implementation.\n\nRegards,\n\nBertrand",
"msg_date": "Thu, 18 Jun 2020 09:28:13 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Add Information during standby recovery conflicts"
},
{
"msg_contents": "On Thu, 18 Jun 2020 at 16:28, Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> Hi hackers,\n>\n> I've attached a patch to add information during standby recovery conflicts.\n> The motive behind is to be able to get some information:\n>\n> On the apply side\n> On the blocker(s) side\n>\n> Motivation:\n>\n> When a standby recovery conflict occurs it could be useful to get more information to be able to dive deep on the root cause and find a way to avoid/mitigate new occurrences.\n\nI think this is a good feature. Like log_lock_waits, it will help the\nusers to investigate recovery conflict issues.\n\n>\n> Adding this information would make the investigations easier, it could help answering questions like:\n>\n> On which LSN was the WAL apply blocked?\n> What was the purpose of the bocked WAL record?\n> On which relation (if any) was the blocked WAL record related to?\n> What was the blocker(s) doing?\n> When did the blocker(s) started their queries (if any)?\n> What was the blocker(s) waiting for? on which wait event?\n>\n> Technical context and proposal:\n>\n> There is 2 points in this patch:\n>\n> Add the information about the blocked WAL record. This is done in standby.c (ResolveRecoveryConflictWithVirtualXIDs, ResolveRecoveryConflictWithDatabase, StandbyTimeoutHandler)\n\nI think we already have the information about the WAL record being\napplied in errcontext.\n\nI wonder if we can show the recovery conflict information in the main\nLOG message, the blocker information in errdetail, and use errcontext\nwith regard to WAL record information. 
For example:\n\nLOG: process 500 waiting for recovery conflict on snapshot\nDETAIL: conflicting transition id: 100, 200, 300\nCONTEXT: WAL redo at 0/3001970 for Heap2/CLEAN: remxid 506\n\n> Outcome Example:\n>\n> 2020-06-15 06:48:23.774 UTC [6971] LOG: wal record apply is blocked by 2 connection(s), reason: User query might have needed to see row versions that must be removed.\n> 2020-06-15 06:48:23.774 UTC [6971] LOG: blocked wal record rmgr: Heap2, lsn: 0/038E2678, received at: 2020-06-15 06:48:23.774098+00, desc: CLEAN, relation: rel 1663/13586/16652 fork main blk 0\n> 2020-06-15 06:48:54.773 UTC [7088] LOG: about to interrupt pid: 7088, backend_type: client backend, state: active, wait_event_type: Timeout, wait_event: PgSleep, query_start: 2020-06-15 06:48:14.87672+00\n\nI'm concerned that the above information on the process who is about\nto be interrupted is very detailed but I'm not sure it will be helpful\nfor the users. If the blocker is waiting on something lock, the\ninformation should be logged by log_lock_waits. Also the canceled\nbackend will emit the ERROR log with the message \"canceling statement\ndue to conflict with recovery”, and its pid can be logged at the log\nprefix. The user can know who has been canceled by recovery conflicts\nand the query.\n\nProbably we need to consider having a time threshold (or boolean to\nturn on/off) to emit this information to the server logs. Otherwise,\nwe will end up writing every conflict information, making the log\ndirty needlessly.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 29 Jun 2020 17:55:06 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "Hey,\n\nOn 6/29/20 10:55 AM, Masahiko Sawada wrote:\n> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n>\n>\n>\n> On Thu, 18 Jun 2020 at 16:28, Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>> Hi hackers,\n>>\n>> I've attached a patch to add information during standby recovery conflicts.\n>> The motive behind is to be able to get some information:\n>>\n>> On the apply side\n>> On the blocker(s) side\n>>\n>> Motivation:\n>>\n>> When a standby recovery conflict occurs it could be useful to get more information to be able to dive deep on the root cause and find a way to avoid/mitigate new occurrences.\n> I think this is a good feature. Like log_lock_waits, it will help the\n> users to investigate recovery conflict issues.\nexactly, thanks for looking at it.\n>\n>> Adding this information would make the investigations easier, it could help answering questions like:\n>>\n>> On which LSN was the WAL apply blocked?\n>> What was the purpose of the bocked WAL record?\n>> On which relation (if any) was the blocked WAL record related to?\n>> What was the blocker(s) doing?\n>> When did the blocker(s) started their queries (if any)?\n>> What was the blocker(s) waiting for? on which wait event?\n>>\n>> Technical context and proposal:\n>>\n>> There is 2 points in this patch:\n>>\n>> Add the information about the blocked WAL record. 
This is done in standby.c (ResolveRecoveryConflictWithVirtualXIDs, ResolveRecoveryConflictWithDatabase, StandbyTimeoutHandler)\n> I think we already have the information about the WAL record being\n> applied in errcontext.\nright, but it won’t be displayed in case log_error_verbosity is set to \nterse.\nOr did you had in mind to try to avoid using the new \n“current_replayed_xlog” in xlog.c?\n\n>\n> I wonder if we can show the recovery conflict information in the main\n> LOG message, the blocker information in errdetail, and use errcontext\n> with regard to WAL record information. For example:\n>\n> LOG: process 500 waiting for recovery conflict on snapshot\n> DETAIL: conflicting transition id: 100, 200, 300\n> CONTEXT: WAL redo at 0/3001970 for Heap2/CLEAN: remxid 506\nNot sure at all if it would be possible to put all the information at \nthe same time.\nFor example in case of shared buffer pin conflict, the blocker is \ncurrently known “only” during the RecoveryConflictInterrupt call (so \nstill not known yet when we can “already” report the blocked LSN \ninformation).\nIt might also happen that the blocker(s) will never get interrupted (was \nnot blocking anymore before max_standby_streaming_delay has been \nreached): then it would not be possible to display all the information \nhere (aka when it is interrupted) while we still want to be warned that \nthe WAL replay has been blocked.\n>\n>> Outcome Example:\n>>\n>> 2020-06-15 06:48:23.774 UTC [6971] LOG: wal record apply is blocked by 2 connection(s), reason: User query might have needed to see row versions that must be removed.\n>> 2020-06-15 06:48:23.774 UTC [6971] LOG: blocked wal record rmgr: Heap2, lsn: 0/038E2678, received at: 2020-06-15 06:48:23.774098+00, desc: CLEAN, relation: rel 1663/13586/16652 fork main blk 0\n>> 2020-06-15 06:48:54.773 UTC [7088] LOG: about to interrupt pid: 7088, backend_type: client backend, state: active, wait_event_type: Timeout, wait_event: PgSleep, query_start: 2020-06-15 
06:48:14.87672+00\n> I'm concerned that the above information on the process who is about\n> to be interrupted is very detailed but I'm not sure it will be helpful\n> for the users. If the blocker is waiting on something lock, the\n> information should be logged by log_lock_waits.\n\nThe blocker could also be in “idle in transaction”, say for example, on \nthe standby (with hot_standby_feedback set to off):\n\nstandby> begin;\nstandby> SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;\nstandby> select * from bdt;\n\non the master:\n\nmaster> update bdt set generate_series = 15;\nmaster> vacuum verbose bdt;\n\nwould produce:\n\n2020-07-01 09:18:55.256 UTC [32751] LOG: about to interrupt pid: 32751, \nbackend_type: client backend, state: idle in transaction, \nwait_event_type: Client, wait_event: ClientRead, query_start: 2020-07-01 \n09:18:17.390572+00\n2020-07-01 09:18:55.256 UTC [32751] DETAIL: statement: select * from bdt;\n\nI think those information are useful to have (might get rid of \nwait_event_type though: done in the new attached patch).\n\n> Also the canceled\n> backend will emit the ERROR log with the message \"canceling statement\n> due to conflict with recovery”, and its pid can be logged at the log\n> prefix. The user can know who has been canceled by recovery conflicts\n> and the query.\n\nRight, we can also get rid of the pid and the query in the new log \nmessage too (done in the new attached patch).\n\n> Probably we need to consider having a time threshold (or boolean to\n> turn on/off) to emit this information to the server logs. Otherwise,\n> we will end up writing every conflict information, making the log\n> dirty needlessly.\n\ngood point, I do agree (done in the new attached patch).\n\nBertrand\n\n>\n> Regards,\n>\n> --\n> Masahiko Sawada http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 1 Jul 2020 14:52:26 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On Wed, 1 Jul 2020 at 21:52, Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> Hey,\n>\n> On 6/29/20 10:55 AM, Masahiko Sawada wrote:\n> > CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n> >\n> >\n> >\n> > On Thu, 18 Jun 2020 at 16:28, Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> >> Hi hackers,\n> >>\n> >> I've attached a patch to add information during standby recovery conflicts.\n> >> The motive behind is to be able to get some information:\n> >>\n> >> On the apply side\n> >> On the blocker(s) side\n> >>\n> >> Motivation:\n> >>\n> >> When a standby recovery conflict occurs it could be useful to get more information to be able to dive deep on the root cause and find a way to avoid/mitigate new occurrences.\n> > I think this is a good feature. Like log_lock_waits, it will help the\n> > users to investigate recovery conflict issues.\n> exactly, thanks for looking at it.\n> >\n> >> Adding this information would make the investigations easier, it could help answering questions like:\n> >>\n> >> On which LSN was the WAL apply blocked?\n> >> What was the purpose of the bocked WAL record?\n> >> On which relation (if any) was the blocked WAL record related to?\n> >> What was the blocker(s) doing?\n> >> When did the blocker(s) started their queries (if any)?\n> >> What was the blocker(s) waiting for? on which wait event?\n> >>\n> >> Technical context and proposal:\n> >>\n> >> There is 2 points in this patch:\n> >>\n> >> Add the information about the blocked WAL record. This is done in standby.c (ResolveRecoveryConflictWithVirtualXIDs, ResolveRecoveryConflictWithDatabase, StandbyTimeoutHandler)\n> > I think we already have the information about the WAL record being\n> > applied in errcontext.\n> right, but it won’t be displayed in case log_error_verbosity is set to\n> terse.\n\nYes. 
I think in this case errcontext or errdetail is more appropriate\nfor this information in this case. Otherwise, we will end up emitting\nsame WAL record information twice in log_error_verbosity = verbose.\n\nFor instance, here is the server logs when I tested a recovery\nconflict on buffer pin but it looks redundant:\n\n2020-07-03 11:01:15.339 JST [60585] LOG: wal record apply is blocked\nby 1 connection(s), reason: User query might have needed to see row\nversions that must be removed.\n2020-07-03 11:01:15.339 JST [60585] CONTEXT: WAL redo at 0/30025E0\nfor Heap2/CLEAN: remxid 505\n2020-07-03 11:01:15.339 JST [60585] LOG: blocked wal record rmgr:\nHeap2, lsn: 0/030025E0, received at: 2020-07-03 11:01:15.338997+09,\ndesc: CLEAN, relation: rel 1663/12930/16384 fork main blk 0\n2020-07-03 11:01:15.339 JST [60585] CONTEXT: WAL redo at 0/30025E0\nfor Heap2/CLEAN: remxid 505\n2020-07-03 11:01:15.347 JST [60604] LOG: about to be interrupted:\nbackend_type: client backend, state: active, wait_event: PgSleep,\nquery_start: 2020-07-03 11:01:14.337708+09\n2020-07-03 11:01:15.347 JST [60604] STATEMENT: select pg_sleep(30);\n2020-07-03 11:01:15.347 JST [60604] ERROR: canceling statement due to\nconflict with recovery\n2020-07-03 11:01:15.347 JST [60604] DETAIL: User query might have\nneeded to see row versions that must be removed.\n2020-07-03 11:01:15.347 JST [60604] STATEMENT: select pg_sleep(30);\n\nThere are the information WAL record three times and the reason for\ninterruption twice.\n\n> Or did you had in mind to try to avoid using the new\n> “current_replayed_xlog” in xlog.c?\n\nRegarding LogBlockedWalRecordInfo(), it seems to me that the main\nmessage of this logging is to let users know both that the process got\nstuck due to recovery conflict and its reason (lock, database,\nbufferpin etc). Other information such as the details of blocked WAL,\nhow many processes are blocking seems like additional information. 
So\nI think this information would be better to be shown errcontext or\nerrdetails and we don’t need to create a similar feature as we already\nshow the WAL record being replayed in errcontext.\n\nAlso, this function emits two LOG messages related to each other but\nI'm concerned that it can be hard for users to associate these\nseparate log lines especially when server logs are written at a high\nrate. And I think these two log lines don't follow our error message\nstyle[1].\n\n>\n> >\n> > I wonder if we can show the recovery conflict information in the main\n> > LOG message, the blocker information in errdetail, and use errcontext\n> > with regard to WAL record information. For example:\n> >\n> > LOG: process 500 waiting for recovery conflict on snapshot\n> > DETAIL: conflicting transition id: 100, 200, 300\n> > CONTEXT: WAL redo at 0/3001970 for Heap2/CLEAN: remxid 506\n> Not sure at all if it would be possible to put all the information at\n> the same time.\n> For example in case of shared buffer pin conflict, the blocker is\n> currently known “only” during the RecoveryConflictInterrupt call (so\n> still not known yet when we can “already” report the blocked LSN\n> information).\n> It might also happen that the blocker(s) will never get interrupted (was\n> not blocking anymore before max_standby_streaming_delay has been\n> reached): then it would not be possible to display all the information\n> here (aka when it is interrupted) while we still want to be warned that\n> the WAL replay has been blocked.\n\nI think it's okay with showing different information for different\ntypes of recovery conflict. 
In buffer pin conflict case, I think we\ncan let the user know the process is waiting for recovery conflict on\nbuffer pin in the main message and the WAL record being replayed in\nerrdetails.\n\n> >\n> >> Outcome Example:\n> >>\n> >> 2020-06-15 06:48:23.774 UTC [6971] LOG: wal record apply is blocked by 2 connection(s), reason: User query might have needed to see row versions that must be removed.\n> >> 2020-06-15 06:48:23.774 UTC [6971] LOG: blocked wal record rmgr: Heap2, lsn: 0/038E2678, received at: 2020-06-15 06:48:23.774098+00, desc: CLEAN, relation: rel 1663/13586/16652 fork main blk 0\n> >> 2020-06-15 06:48:54.773 UTC [7088] LOG: about to interrupt pid: 7088, backend_type: client backend, state: active, wait_event_type: Timeout, wait_event: PgSleep, query_start: 2020-06-15 06:48:14.87672+00\n> > I'm concerned that the above information on the process who is about\n> > to be interrupted is very detailed but I'm not sure it will be helpful\n> > for the users. If the blocker is waiting on something lock, the\n> > information should be logged by log_lock_waits.\n>\n> The blocker could also be in “idle in transaction”, say for example, on\n> the standby (with hot_standby_feedback set to off):\n>\n> standby> begin;\n> standby> SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;\n> standby> select * from bdt;\n>\n> on the master:\n>\n> master> update bdt set generate_series = 15;\n> master> vacuum verbose bdt;\n>\n> would produce:\n>\n> 2020-07-01 09:18:55.256 UTC [32751] LOG: about to interrupt pid: 32751,\n> backend_type: client backend, state: idle in transaction,\n> wait_event_type: Client, wait_event: ClientRead, query_start: 2020-07-01\n> 09:18:17.390572+00\n> 2020-07-01 09:18:55.256 UTC [32751] DETAIL: statement: select * from bdt;\n>\n> I think those information are useful to have (might get rid of\n> wait_event_type though: done in the new attached patch).\n\nSince the backend cancels query at a convenient time\n(CHECK_FOR_INTERRUPT()), I'm concerned that 
the wait event information\ndisplayed in that log might not be helpful. For example, even if the\nblocker is getting stuck on disk I/O (WAIT_EVENT_BUFFILE_READ etc)\nwhile holding a lock that the recovery process is waiting for, when\nthe blocker is able to cancel the query it's no longer waiting for\ndisk I/O. Also, regarding displaying the backend type, we already show\nthe backend type by setting %d in log_line_prefix.\n\nI still think we should show this additional information (wait event,\nquery start etc) in errdetails even if we want to show this\ninformation in the server logs. Perhaps we can improve\nerrdetail_recovery_conflict()?\n\nAside from the above comments from the perspective of high-level\ndesign, I think we can split this patch into two patches: logging the\nrecovery process is waiting (adding log_recovery_conficts) and logging\nthe details of blockers who is about to be interrupted. I think it'll\nmake review easy.\n\nRegards,\n\n[1] https://www.postgresql.org/docs/devel/error-style-guide.html\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 3 Jul 2020 11:20:22 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On 7/3/20 4:20 AM, Masahiko Sawada wrote:\n> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n>\n>\n>\n> On Wed, 1 Jul 2020 at 21:52, Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>> Hey,\n>>\n>> On 6/29/20 10:55 AM, Masahiko Sawada wrote:\n>>> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n>>>\n>>>\n>>>\n>>> On Thu, 18 Jun 2020 at 16:28, Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>>>> Hi hackers,\n>>>>\n>>>> I've attached a patch to add information during standby recovery conflicts.\n>>>> The motive behind is to be able to get some information:\n>>>>\n>>>> On the apply side\n>>>> On the blocker(s) side\n>>>>\n>>>> Motivation:\n>>>>\n>>>> When a standby recovery conflict occurs it could be useful to get more information to be able to dive deep on the root cause and find a way to avoid/mitigate new occurrences.\n>>> I think this is a good feature. Like log_lock_waits, it will help the\n>>> users to investigate recovery conflict issues.\n>> exactly, thanks for looking at it.\n>>>> Adding this information would make the investigations easier, it could help answering questions like:\n>>>>\n>>>> On which LSN was the WAL apply blocked?\n>>>> What was the purpose of the bocked WAL record?\n>>>> On which relation (if any) was the blocked WAL record related to?\n>>>> What was the blocker(s) doing?\n>>>> When did the blocker(s) started their queries (if any)?\n>>>> What was the blocker(s) waiting for? on which wait event?\n>>>>\n>>>> Technical context and proposal:\n>>>>\n>>>> There is 2 points in this patch:\n>>>>\n>>>> Add the information about the blocked WAL record. 
This is done in standby.c (ResolveRecoveryConflictWithVirtualXIDs, ResolveRecoveryConflictWithDatabase, StandbyTimeoutHandler)\n>>> I think we already have the information about the WAL record being\n>>> applied in errcontext.\n>> right, but it won’t be displayed in case log_error_verbosity is set to\n>> terse.\n> Yes. I think in this case errcontext or errdetail is more appropriate\n> for this information in this case. Otherwise, we will end up emitting\n> same WAL record information twice in log_error_verbosity = verbose.\n>\n> For instance, here is the server logs when I tested a recovery\n> conflict on buffer pin but it looks redundant:\n>\n> 2020-07-03 11:01:15.339 JST [60585] LOG: wal record apply is blocked\n> by 1 connection(s), reason: User query might have needed to see row\n> versions that must be removed.\n> 2020-07-03 11:01:15.339 JST [60585] CONTEXT: WAL redo at 0/30025E0\n> for Heap2/CLEAN: remxid 505\n> 2020-07-03 11:01:15.339 JST [60585] LOG: blocked wal record rmgr:\n> Heap2, lsn: 0/030025E0, received at: 2020-07-03 11:01:15.338997+09,\n> desc: CLEAN, relation: rel 1663/12930/16384 fork main blk 0\n> 2020-07-03 11:01:15.339 JST [60585] CONTEXT: WAL redo at 0/30025E0\n> for Heap2/CLEAN: remxid 505\n> 2020-07-03 11:01:15.347 JST [60604] LOG: about to be interrupted:\n> backend_type: client backend, state: active, wait_event: PgSleep,\n> query_start: 2020-07-03 11:01:14.337708+09\n> 2020-07-03 11:01:15.347 JST [60604] STATEMENT: select pg_sleep(30);\n> 2020-07-03 11:01:15.347 JST [60604] ERROR: canceling statement due to\n> conflict with recovery\n> 2020-07-03 11:01:15.347 JST [60604] DETAIL: User query might have\n> needed to see row versions that must be removed.\n> 2020-07-03 11:01:15.347 JST [60604] STATEMENT: select pg_sleep(30);\n>\n> There are the information WAL record three times and the reason for\n> interruption twice.\n\n\nFully makes sense, the new patch version attached is now producing:\n\n2020-07-06 06:10:36.022 UTC [14035] LOG: 
waiting for recovery conflict \non snapshot\n2020-07-06 06:10:36.022 UTC [14035] DETAIL: WAL record received at \n2020-07-06 06:10:36.021963+00.\n Tablespace/database/relation are 1663/13586/16672, fork is main \nand block is 0.\n There is 1 blocking connection(s).\n2020-07-06 06:10:36.022 UTC [14035] CONTEXT: WAL redo at 0/3A05708 for \nHeap2/CLEAN: remxid 972\n\nIt does not provide the pid(s) of the blocking processes as they'll \nappear during the interruption(s).\n\n>> Or did you had in mind to try to avoid using the new\n>> “current_replayed_xlog” in xlog.c?\n> Regarding LogBlockedWalRecordInfo(), it seems to me that the main\n> message of this logging is to let users know both that the process got\n> stuck due to recovery conflict and its reason (lock, database,\n> bufferpin etc). Other information such as the details of blocked WAL,\n> how many processes are blocking seems like additional information. So\n> I think this information would be better to be shown errcontext or\n> errdetails and we don’t need to create a similar feature as we already\n> show the WAL record being replayed in errcontext.\n\n\nI got rid of current_replayed_xlog in the new patch attached and make \nuse of the errcontext argument instead.\n\n>\n> Also, this function emits two LOG messages related to each other but\n> I'm concerned that it can be hard for users to associate these\n> separate log lines especially when server logs are written at a high\n> rate. And I think these two log lines don't follow our error message\n> style[1].\n>\n>>> I wonder if we can show the recovery conflict information in the main\n>>> LOG message, the blocker information in errdetail, and use errcontext\n>>> with regard to WAL record information. 
For example:\n>>>\n>>> LOG: process 500 waiting for recovery conflict on snapshot\n>>> DETAIL: conflicting transition id: 100, 200, 300\n>>> CONTEXT: WAL redo at 0/3001970 for Heap2/CLEAN: remxid 506\n>> Not sure at all if it would be possible to put all the information at\n>> the same time.\n>> For example in case of shared buffer pin conflict, the blocker is\n>> currently known “only” during the RecoveryConflictInterrupt call (so\n>> still not known yet when we can “already” report the blocked LSN\n>> information).\n>> It might also happen that the blocker(s) will never get interrupted (was\n>> not blocking anymore before max_standby_streaming_delay has been\n>> reached): then it would not be possible to display all the information\n>> here (aka when it is interrupted) while we still want to be warned that\n>> the WAL replay has been blocked.\n> I think it's okay with showing different information for different\n> types of recovery conflict. In buffer pin conflict case, I think we\n> can let the user know the process is waiting for recovery conflict on\n> buffer pin in the main message and the WAL record being replayed in\n> errdetails.\n>\n>>>> Outcome Example:\n>>>>\n>>>> 2020-06-15 06:48:23.774 UTC [6971] LOG: wal record apply is blocked by 2 connection(s), reason: User query might have needed to see row versions that must be removed.\n>>>> 2020-06-15 06:48:23.774 UTC [6971] LOG: blocked wal record rmgr: Heap2, lsn: 0/038E2678, received at: 2020-06-15 06:48:23.774098+00, desc: CLEAN, relation: rel 1663/13586/16652 fork main blk 0\n>>>> 2020-06-15 06:48:54.773 UTC [7088] LOG: about to interrupt pid: 7088, backend_type: client backend, state: active, wait_event_type: Timeout, wait_event: PgSleep, query_start: 2020-06-15 06:48:14.87672+00\n>>> I'm concerned that the above information on the process who is about\n>>> to be interrupted is very detailed but I'm not sure it will be helpful\n>>> for the users. 
If the blocker is waiting on something lock, the\n>>> information should be logged by log_lock_waits.\n>> The blocker could also be in “idle in transaction”, say for example, on\n>> the standby (with hot_standby_feedback set to off):\n>>\n>> standby> begin;\n>> standby> SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;\n>> standby> select * from bdt;\n>>\n>> on the master:\n>>\n>> master> update bdt set generate_series = 15;\n>> master> vacuum verbose bdt;\n>>\n>> would produce:\n>>\n>> 2020-07-01 09:18:55.256 UTC [32751] LOG: about to interrupt pid: 32751,\n>> backend_type: client backend, state: idle in transaction,\n>> wait_event_type: Client, wait_event: ClientRead, query_start: 2020-07-01\n>> 09:18:17.390572+00\n>> 2020-07-01 09:18:55.256 UTC [32751] DETAIL: statement: select * from bdt;\n>>\n>> I think those information are useful to have (might get rid of\n>> wait_event_type though: done in the new attached patch).\n> Since the backend cancels query at a convenient time\n> (CHECK_FOR_INTERRUPT()), I'm concerned that the wait event information\n> displayed in that log might not be helpful. For example, even if the\n> blocker is getting stuck on disk I/O (WAIT_EVENT_BUFFILE_READ etc)\n> while holding a lock that the recovery process is waiting for, when\n> the blocker is able to cancel the query it's no longer waiting for\n> disk I/O. Also, regarding displaying the backend type, we already show\n> the backend type by setting %d in log_line_prefix.\n>\n> I still think we should show this additional information (wait event,\n> query start etc) in errdetails even if we want to show this\n> information in the server logs. Perhaps we can improve\n> errdetail_recovery_conflict()?\n>\n> Aside from the above comments from the perspective of high-level\n> design, I think we can split this patch into two patches: logging the\n> recovery process is waiting (adding log_recovery_conficts) and logging\n> the details of blockers who is about to be interrupted. 
I think it'll\n> make review easy.\n\n\nOk. Let's keep this thread for the new attached patch that focus on the \nrecovery process waiting.\n\nI'll create a new one with a dedicated thread for the blockers \ninformation once done.\n\nThanks\n\nBertrand\n\n> Regards,\n>\n> [1] https://www.postgresql.org/docs/devel/error-style-guide.html\n>\n> --\n> Masahiko Sawada http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 6 Jul 2020 08:41:49 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On Mon, 6 Jul 2020 at 15:42, Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n>\n> On 7/3/20 4:20 AM, Masahiko Sawada wrote:\n> > CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n> >\n> >\n> >\n> > On Wed, 1 Jul 2020 at 21:52, Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> >> Hey,\n> >>\n> >> On 6/29/20 10:55 AM, Masahiko Sawada wrote:\n> >>> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n> >>>\n> >>>\n> >>>\n> >>> On Thu, 18 Jun 2020 at 16:28, Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> >>>> Hi hackers,\n> >>>>\n> >>>> I've attached a patch to add information during standby recovery conflicts.\n> >>>> The motive behind is to be able to get some information:\n> >>>>\n> >>>> On the apply side\n> >>>> On the blocker(s) side\n> >>>>\n> >>>> Motivation:\n> >>>>\n> >>>> When a standby recovery conflict occurs it could be useful to get more information to be able to dive deep on the root cause and find a way to avoid/mitigate new occurrences.\n> >>> I think this is a good feature. Like log_lock_waits, it will help the\n> >>> users to investigate recovery conflict issues.\n> >> exactly, thanks for looking at it.\n> >>>> Adding this information would make the investigations easier, it could help answering questions like:\n> >>>>\n> >>>> On which LSN was the WAL apply blocked?\n> >>>> What was the purpose of the bocked WAL record?\n> >>>> On which relation (if any) was the blocked WAL record related to?\n> >>>> What was the blocker(s) doing?\n> >>>> When did the blocker(s) started their queries (if any)?\n> >>>> What was the blocker(s) waiting for? 
on which wait event?\n> >>>>\n> >>>> Technical context and proposal:\n> >>>>\n> >>>> There is 2 points in this patch:\n> >>>>\n> >>>> Add the information about the blocked WAL record. This is done in standby.c (ResolveRecoveryConflictWithVirtualXIDs, ResolveRecoveryConflictWithDatabase, StandbyTimeoutHandler)\n> >>> I think we already have the information about the WAL record being\n> >>> applied in errcontext.\n> >> right, but it won’t be displayed in case log_error_verbosity is set to\n> >> terse.\n> > Yes. I think in this case errcontext or errdetail is more appropriate\n> > for this information in this case. Otherwise, we will end up emitting\n> > same WAL record information twice in log_error_verbosity = verbose.\n> >\n> > For instance, here is the server logs when I tested a recovery\n> > conflict on buffer pin but it looks redundant:\n> >\n> > 2020-07-03 11:01:15.339 JST [60585] LOG: wal record apply is blocked\n> > by 1 connection(s), reason: User query might have needed to see row\n> > versions that must be removed.\n> > 2020-07-03 11:01:15.339 JST [60585] CONTEXT: WAL redo at 0/30025E0\n> > for Heap2/CLEAN: remxid 505\n> > 2020-07-03 11:01:15.339 JST [60585] LOG: blocked wal record rmgr:\n> > Heap2, lsn: 0/030025E0, received at: 2020-07-03 11:01:15.338997+09,\n> > desc: CLEAN, relation: rel 1663/12930/16384 fork main blk 0\n> > 2020-07-03 11:01:15.339 JST [60585] CONTEXT: WAL redo at 0/30025E0\n> > for Heap2/CLEAN: remxid 505\n> > 2020-07-03 11:01:15.347 JST [60604] LOG: about to be interrupted:\n> > backend_type: client backend, state: active, wait_event: PgSleep,\n> > query_start: 2020-07-03 11:01:14.337708+09\n> > 2020-07-03 11:01:15.347 JST [60604] STATEMENT: select pg_sleep(30);\n> > 2020-07-03 11:01:15.347 JST [60604] ERROR: canceling statement due to\n> > conflict with recovery\n> > 2020-07-03 11:01:15.347 JST [60604] DETAIL: User query might have\n> > needed to see row versions that must be removed.\n> > 2020-07-03 11:01:15.347 JST [60604] 
STATEMENT: select pg_sleep(30);\n> >\n> > There are the information WAL record three times and the reason for\n> > interruption twice.\n>\n>\n> Fully makes sense, the new patch version attached is now producing:\n>\n> 2020-07-06 06:10:36.022 UTC [14035] LOG: waiting for recovery conflict\n> on snapshot\n\nHow about adding the subject? that is, \"recovery is waiting for\nrecovery conflict on %s\" or \"recovery process <pid> is waiting for\nconflict on %s\".\n\n> 2020-07-06 06:10:36.022 UTC [14035] DETAIL: WAL record received at\n> 2020-07-06 06:10:36.021963+00.\n> Tablespace/database/relation are 1663/13586/16672, fork is main\n> and block is 0.\n> There is 1 blocking connection(s).\n\nTo follow the existing log message, perhaps this can be something like\n\"WAL record received at %s, rel %u/%u/%u, fork %s, blkno %u. %d\nprocesses\". But I'm not sure the errdetail is the best place to\ndisplay the WAL information as I mentioned in the latter part of this\nemail.\n\n> 2020-07-06 06:10:36.022 UTC [14035] CONTEXT: WAL redo at 0/3A05708 for\n> Heap2/CLEAN: remxid 972\n>\n> It does not provide the pid(s) of the blocking processes as they'll\n> appear during the interruption(s).\n>\n> >> Or did you had in mind to try to avoid using the new\n> >> “current_replayed_xlog” in xlog.c?\n> > Regarding LogBlockedWalRecordInfo(), it seems to me that the main\n> > message of this logging is to let users know both that the process got\n> > stuck due to recovery conflict and its reason (lock, database,\n> > bufferpin etc). Other information such as the details of blocked WAL,\n> > how many processes are blocking seems like additional information. 
So\n> > I think this information would be better to be shown errcontext or\n> > errdetails and we don’t need to create a similar feature as we already\n> > show the WAL record being replayed in errcontext.\n>\n>\n> I got rid of current_replayed_xlog in the new patch attached and make\n> use of the errcontext argument instead.\n>\n> >\n> > Also, this function emits two LOG messages related to each other but\n> > I'm concerned that it can be hard for users to associate these\n> > separate log lines especially when server logs are written at a high\n> > rate. And I think these two log lines don't follow our error message\n> > style[1].\n> >\n> >>> I wonder if we can show the recovery conflict information in the main\n> >>> LOG message, the blocker information in errdetail, and use errcontext\n> >>> with regard to WAL record information. For example:\n> >>>\n> >>> LOG: process 500 waiting for recovery conflict on snapshot\n> >>> DETAIL: conflicting transition id: 100, 200, 300\n> >>> CONTEXT: WAL redo at 0/3001970 for Heap2/CLEAN: remxid 506\n> >> Not sure at all if it would be possible to put all the information at\n> >> the same time.\n> >> For example in case of shared buffer pin conflict, the blocker is\n> >> currently known “only” during the RecoveryConflictInterrupt call (so\n> >> still not known yet when we can “already” report the blocked LSN\n> >> information).\n> >> It might also happen that the blocker(s) will never get interrupted (was\n> >> not blocking anymore before max_standby_streaming_delay has been\n> >> reached): then it would not be possible to display all the information\n> >> here (aka when it is interrupted) while we still want to be warned that\n> >> the WAL replay has been blocked.\n> > I think it's okay with showing different information for different\n> > types of recovery conflict. 
In buffer pin conflict case, I think we\n> > can let the user know the process is waiting for recovery conflict on\n> > buffer pin in the main message and the WAL record being replayed in\n> > errdetails.\n> >\n> >>>> Outcome Example:\n> >>>>\n> >>>> 2020-06-15 06:48:23.774 UTC [6971] LOG: wal record apply is blocked by 2 connection(s), reason: User query might have needed to see row versions that must be removed.\n> >>>> 2020-06-15 06:48:23.774 UTC [6971] LOG: blocked wal record rmgr: Heap2, lsn: 0/038E2678, received at: 2020-06-15 06:48:23.774098+00, desc: CLEAN, relation: rel 1663/13586/16652 fork main blk 0\n> >>>> 2020-06-15 06:48:54.773 UTC [7088] LOG: about to interrupt pid: 7088, backend_type: client backend, state: active, wait_event_type: Timeout, wait_event: PgSleep, query_start: 2020-06-15 06:48:14.87672+00\n> >>> I'm concerned that the above information on the process who is about\n> >>> to be interrupted is very detailed but I'm not sure it will be helpful\n> >>> for the users. 
If the blocker is waiting on something lock, the\n> >>> information should be logged by log_lock_waits.\n> >> The blocker could also be in “idle in transaction”, say for example, on\n> >> the standby (with hot_standby_feedback set to off):\n> >>\n> >> standby> begin;\n> >> standby> SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;\n> >> standby> select * from bdt;\n> >>\n> >> on the master:\n> >>\n> >> master> update bdt set generate_series = 15;\n> >> master> vacuum verbose bdt;\n> >>\n> >> would produce:\n> >>\n> >> 2020-07-01 09:18:55.256 UTC [32751] LOG: about to interrupt pid: 32751,\n> >> backend_type: client backend, state: idle in transaction,\n> >> wait_event_type: Client, wait_event: ClientRead, query_start: 2020-07-01\n> >> 09:18:17.390572+00\n> >> 2020-07-01 09:18:55.256 UTC [32751] DETAIL: statement: select * from bdt;\n> >>\n> >> I think those information are useful to have (might get rid of\n> >> wait_event_type though: done in the new attached patch).\n> > Since the backend cancels query at a convenient time\n> > (CHECK_FOR_INTERRUPT()), I'm concerned that the wait event information\n> > displayed in that log might not be helpful. For example, even if the\n> > blocker is getting stuck on disk I/O (WAIT_EVENT_BUFFILE_READ etc)\n> > while holding a lock that the recovery process is waiting for, when\n> > the blocker is able to cancel the query it's no longer waiting for\n> > disk I/O. Also, regarding displaying the backend type, we already show\n> > the backend type by setting %d in log_line_prefix.\n> >\n> > I still think we should show this additional information (wait event,\n> > query start etc) in errdetails even if we want to show this\n> > information in the server logs. 
Perhaps we can improve\n> > errdetail_recovery_conflict()?\n> >\n> > Aside from the above comments from the perspective of high-level\n> > design, I think we can split this patch into two patches: logging the\n> > recovery process is waiting (adding log_recovery_conficts) and logging\n> > the details of blockers who is about to be interrupted. I think it'll\n> > make review easy.\n>\n>\n> Ok. Let's keep this thread for the new attached patch that focus on the\n> recovery process waiting.\n\nThank you for updating the patch!\n\nI've tested the latest patch. On recovery conflict on lock and on\nbufferpin, if max_standby_streaming_delay is disabled (set to -1), the\nlogs don't appear even if log_recovery_conflicts is true.\n\nHere is random comments on the code:\n\n+ recovery_conflict_main_message = psprintf(\"waiting for\nrecovery conflict on %s\",\n+\nget_procsignal_reason_desc(reason));\n:\n+ ereport(LOG,\n+ (errmsg(\"%s\", recovery_conflict_main_message),\n+ errdetail(\"%s\\n\" \"There is %d blocking\nconnection(s).\", wal_record_detail_str, num_waitlist_entries)));\n\nIt's not translation-support-friendly. I think the message \"waiting\nfor recovery conflict on %s\" should be surrounded by _(). 
Or we can\njust put it to ereport as follows:\n\nereport(LOG,\n (errmsg(\"waiting for recovery conflicts on %s\",\nget_procsignal_reason_desc(reason))\n ...\n\n---\n+ oldcontext = MemoryContextSwitchTo(ErrorContext);\n+ econtext = error_context_stack;\n+\n+ if (XLogRecGetBlockTag(econtext->arg, 0, &rnode, &forknum, &blknum))\n\nI don't think it's a good idea to rely on error_context_stack because\nother codes might set another error context before reaching here in\nthe future.\n\n---\n+ if (XLogRecGetBlockTag(econtext->arg, 0, &rnode, &forknum, &blknum))\n+ wal_record_detail_str = psprintf(\"WAL record received\nat %s.\\nTablespace/database/relation are %u/%u/%u, fork is %s and\nblock is %u.\",\n+ receipt_time_str,\nrnode.spcNode, rnode.dbNode, rnode.relNode,\n+ forkNames[forknum],\n+ blknum);\n\nThere might be a block tag in block ids other than 0. I'm not sure the\nerrdetail is the best place where we display WAL information. For\ninstance, we display both the relation oid and block number depending\non RM as follows:\n\n2020-07-07 15:50:30.733 JST [13344] LOG: waiting for recovery conflict on lock\n2020-07-07 15:50:30.733 JST [13344] DETAIL: WAL record received at\n2020-07-07 15:50:27.73378+09.\n There is 1 blocking connection(s).\n2020-07-07 15:50:30.733 JST [13344] CONTEXT: WAL redo at 0/3000028\nfor Standby/LOCK: xid 506 db 13586 rel 16384\n\nI wonder if we can display the details of redo WAL information by\nimproving xlog_outdesc() or rm_redo_error_callback() so that it\ndisplays relfilenode, forknum, and block number. What do you think?\n\n---\n+ /* display wal record information */\n+ if (log_recovery_conflicts)\n+ LogBlockedWalRecordInfo(num_waitlist_entries, reason);\n\nI'm concerned that we should log the recovery conflict information\nevery time when a recovery conflict happens from the perspective of\noverheads and the amount of the logs. 
Can we logs that information\nafter waiting for deadlock_timeouts secs like log_lock_waits or\nwaiting for the fixed duration?\n\n---\n@@ -609,6 +682,10 @@ StandbyTimeoutHandler(void)\n /* forget any pending STANDBY_DEADLOCK_TIMEOUT request */\n disable_timeout(STANDBY_DEADLOCK_TIMEOUT, false);\n\n+ /* display wal record information */\n+ if (log_recovery_conflicts)\n+ LogBlockedWalRecordInfo(-1,\nPROCSIG_RECOVERY_CONFLICT_BUFFERPIN);\n+\n SendRecoveryConflictWithBufferPin(PROCSIG_RECOVERY_CONFLICT_BUFFERPIN);\n }\n\nResolveRecoveryConflictWithBufferPin() which sets a timer to call\nStandbyTimeoutHandler() can be called multiple times even if the\nrecovery is waiting for one buffer pin. I think we should avoid\nlogging the same contents multiple times.\n\n---\n-\n+ {\n+ {\"log_recovery_conflicts\", PGC_SUSET, LOGGING_WHAT,\n+ gettext_noop(\"Logs standby recovery conflicts.\"),\n+ NULL\n+ },\n+ &log_recovery_conflicts,\n+ true,\n+ NULL, NULL, NULL\n+ },\n\nOther logging parameters such as log_lock_waits is false by default. I\nthink this parameter can also be false by default but is there any\nreason to enable it by default?\n\nSince this parameter applies only to the startup process, I think it\nshould be PGC_SIGHUP.\n\n---\n+ /* display wal record information */\n+ if (log_recovery_conflicts)\n+ LogBlockedWalRecordInfo(CountDBBackends(dbid),\nPROCSIG_RECOVERY_CONFLICT_DATABASE);\n\nWe log the recovery conflict into the server log but we don't update\nthe process title to append \"waiting\". While discussing the process\ntitle update on recovery conflict, I got the review comment[1] that we\ndon't need to update the process title because no wait occurs when\nrecovery conflict with database happens. 
As the comment says, recovery\nis canceling the existing processes on the database being removed, but\nnot waiting for something.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/84e4ea5f-80ec-9862-d419-c4433b3c2965%40oss.nttdata.com\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 7 Jul 2020 16:43:27 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On 7/7/20 9:43 AM, Masahiko Sawada wrote:\n> Fully makes sense, the new patch version attached is now producing:\n>> 2020-07-06 06:10:36.022 UTC [14035] LOG: waiting for recovery conflict\n>> on snapshot\n> How about adding the subject? that is, \"recovery is waiting for\n> recovery conflict on %s\" or \"recovery process <pid> is waiting for\n> conflict on %s\".\n\n\nThe subject is now added in the new attached patch (I did not include \nthe pid as it is part of the log prefix).\n\nIt now looks like:\n\n2020-07-11 12:00:41.092 UTC [23217] LOG: recovery is waiting for \nrecovery conflict on snapshot\n2020-07-11 12:00:41.092 UTC [23217] DETAIL: There is 1 blocking \nconnection(s).\n2020-07-11 12:00:41.092 UTC [23217] CONTEXT: WAL redo at 0/4A0A6BF0 for \nHeap2/CLEAN: remxid 1128\n WAL record received at 2020-07-11 12:00:41.092231+00\n tbs 1663 db 13586 rel 16805, fork main, blkno 0\n>\n>> 2020-07-06 06:10:36.022 UTC [14035] DETAIL: WAL record received at\n>> 2020-07-06 06:10:36.021963+00.\n>> Tablespace/database/relation are 1663/13586/16672, fork is main\n>> and block is 0.\n>> There is 1 blocking connection(s).\n> To follow the existing log message, perhaps this can be something like\n> \"WAL record received at %s, rel %u/%u/%u, fork %s, blkno %u. %d\n> processes\". But I'm not sure the errdetail is the best place to\n> display the WAL information as I mentioned in the latter part of this\n> email.\n\nmoved to the context and formatted the same way as the current \nStandby/LOCK context.\n\n\n> Ok. Let's keep this thread for the new attached patch that focus on the\n>> recovery process waiting.\n> Thank you for updating the patch!\n>\n> I've tested the latest patch.\n\n\nThank you for testing and reviewing!\n\n\n> On recovery conflict on lock and on\n> bufferpin, if max_standby_streaming_delay is disabled (set to -1), the\n> logs don't appear even if log_recovery_conflicts is true.\n\n\nNice catch! 
it is fixed in the new attached patch (the log reporting has \nbeen moved out of StandbyTimeoutHandler()).\n\n>\n> Here is random comments on the code:\n>\n> + recovery_conflict_main_message = psprintf(\"waiting for\n> recovery conflict on %s\",\n> +\n> get_procsignal_reason_desc(reason));\n> :\n> + ereport(LOG,\n> + (errmsg(\"%s\", recovery_conflict_main_message),\n> + errdetail(\"%s\\n\" \"There is %d blocking\n> connection(s).\", wal_record_detail_str, num_waitlist_entries)));\n>\n> It's not translation-support-friendly. I think the message \"waiting\n> for recovery conflict on %s\" should be surrounded by _(). Or we can\n> just put it to ereport as follows:\n>\n> ereport(LOG,\n> (errmsg(\"waiting for recovery conflicts on %s\",\n> get_procsignal_reason_desc(reason))\n> ...\n\n\nchanged in the new attached patch.\n\n\n>\n> ---\n> + oldcontext = MemoryContextSwitchTo(ErrorContext);\n> + econtext = error_context_stack;\n> +\n> + if (XLogRecGetBlockTag(econtext->arg, 0, &rnode, &forknum, &blknum))\n>\n> I don't think it's a good idea to rely on error_context_stack because\n> other codes might set another error context before reaching here in\n> the future.\n\n\nright, changed in the new attached patch: this is now done in \nrm_redo_error_callback() and using the XLogReaderState passed as argument.\n\n\n>\n> ---\n> + if (XLogRecGetBlockTag(econtext->arg, 0, &rnode, &forknum, &blknum))\n> + wal_record_detail_str = psprintf(\"WAL record received\n> at %s.\\nTablespace/database/relation are %u/%u/%u, fork is %s and\n> block is %u.\",\n> + receipt_time_str,\n> rnode.spcNode, rnode.dbNode, rnode.relNode,\n> + forkNames[forknum],\n> + blknum);\n>\n> There might be a block tag in block ids other than 0.\n\n\nright, fixed in the new attached patch.\n\n\n> I'm not sure the\n> errdetail is the best place where we display WAL information.\n\n\nmoved to context in the new attached patch.\n\n\n> For\n> instance, we display both the relation oid and block number depending\n> on 
RM as follows:\n>\n> 2020-07-07 15:50:30.733 JST [13344] LOG: waiting for recovery conflict on lock\n> 2020-07-07 15:50:30.733 JST [13344] DETAIL: WAL record received at\n> 2020-07-07 15:50:27.73378+09.\n> There is 1 blocking connection(s).\n> 2020-07-07 15:50:30.733 JST [13344] CONTEXT: WAL redo at 0/3000028\n> for Standby/LOCK: xid 506 db 13586 rel 16384\n>\n> I wonder if we can display the details of redo WAL information by\n> improving xlog_outdesc() or rm_redo_error_callback() so that it\n> displays relfilenode, forknum, and block number. What do you think?\n\n\nI think that fully makes sense to move this to rm_redo_error_callback().\n\nThis is where the information is now displayed in the new attached \npatch.\n\n\n>\n> ---\n> + /* display wal record information */\n> + if (log_recovery_conflicts)\n> + LogBlockedWalRecordInfo(num_waitlist_entries, reason);\n>\n> I'm concerned that we should log the recovery conflict information\n> every time when a recovery conflict happens from the perspective of\n> overheads and the amount of the logs. Can we logs that information\n> after waiting for deadlock_timeouts secs like log_lock_waits or\n> waiting for the fixed duration?\n\n\nThe new attached patch is now waiting for deadlock_timeout duration.\n\n>\n> ---\n> @@ -609,6 +682,10 @@ StandbyTimeoutHandler(void)\n> /* forget any pending STANDBY_DEADLOCK_TIMEOUT request */\n> disable_timeout(STANDBY_DEADLOCK_TIMEOUT, false);\n>\n> + /* display wal record information */\n> + if (log_recovery_conflicts)\n> + LogBlockedWalRecordInfo(-1,\n> PROCSIG_RECOVERY_CONFLICT_BUFFERPIN);\n> +\n> SendRecoveryConflictWithBufferPin(PROCSIG_RECOVERY_CONFLICT_BUFFERPIN);\n> }\n>\n> ResolveRecoveryConflictWithBufferPin() which sets a timer to call\n> StandbyTimeoutHandler() can be called multiple times even if the\n> recovery is waiting for one buffer pin. 
I think we should avoid\n> logging the same contents multiple times.\n\n\nI do agree, only the first pass is now logged.\n\n\n>\n> ---\n> -\n> + {\n> + {\"log_recovery_conflicts\", PGC_SUSET, LOGGING_WHAT,\n> + gettext_noop(\"Logs standby recovery conflicts.\"),\n> + NULL\n> + },\n> + &log_recovery_conflicts,\n> + true,\n> + NULL, NULL, NULL\n> + },\n>\n> Other logging parameters such as log_lock_waits is false by default. I\n> think this parameter can also be false by default but is there any\n> reason to enable it by default?\n\n\nnow set to false by default.\n\n\n>\n> Since this parameter applies only to the startup process, I think it\n> should be PGC_SIGHUP.\n\n\nchanged that way.\n\n\n>\n> ---\n> + /* display wal record information */\n> + if (log_recovery_conflicts)\n> + LogBlockedWalRecordInfo(CountDBBackends(dbid),\n> PROCSIG_RECOVERY_CONFLICT_DATABASE);\n>\n> We log the recovery conflict into the server log but we don't update\n> the process title to append \"waiting\". While discussing the process\n> title update on recovery conflict, I got the review comment[1] that we\n> don't need to update the process title because no wait occurs when\n> recovery conflict with database happens. As the comment says, recovery\n> is canceling the existing processes on the database being removed, but\n> not waiting for something.\n\n\nOk, I keep reporting the conflict but changed the wording for this \nparticular case.\n\nThanks\n\nBertrand",
"msg_date": "Sat, 11 Jul 2020 14:55:57 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On Sat, 11 Jul 2020 at 21:56, Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n>\n> On 7/7/20 9:43 AM, Masahiko Sawada wrote:\n> > Fully makes sense, the new patch version attached is now producing:\n> >> 2020-07-06 06:10:36.022 UTC [14035] LOG: waiting for recovery conflict\n> >> on snapshot\n> > How about adding the subject? that is, \"recovery is waiting for\n> > recovery conflict on %s\" or \"recovery process <pid> is waiting for\n> > conflict on %s\".\n>\n>\n> The subject is now added in the new attached patch (I did not include\n> the pid as it is part of the log prefix).\n>\n> It now looks like:\n>\n> 2020-07-11 12:00:41.092 UTC [23217] LOG: recovery is waiting for\n> recovery conflict on snapshot\n> 2020-07-11 12:00:41.092 UTC [23217] DETAIL: There is 1 blocking\n> connection(s).\n> 2020-07-11 12:00:41.092 UTC [23217] CONTEXT: WAL redo at 0/4A0A6BF0 for\n> Heap2/CLEAN: remxid 1128\n> WAL record received at 2020-07-11 12:00:41.092231+00\n> tbs 1663 db 13586 rel 16805, fork main, blkno 0\n> >\n> >> 2020-07-06 06:10:36.022 UTC [14035] DETAIL: WAL record received at\n> >> 2020-07-06 06:10:36.021963+00.\n> >> Tablespace/database/relation are 1663/13586/16672, fork is main\n> >> and block is 0.\n> >> There is 1 blocking connection(s).\n> > To follow the existing log message, perhaps this can be something like\n> > \"WAL record received at %s, rel %u/%u/%u, fork %s, blkno %u. %d\n> > processes\". But I'm not sure the errdetail is the best place to\n> > display the WAL information as I mentioned in the latter part of this\n> > email.\n>\n> moved to the context and formatted the same way as the current\n> Standby/LOCK context.\n>\n>\n> > Ok. 
Let's keep this thread for the new attached patch that focus on the\n> >> recovery process waiting.\n> > Thank you for updating the patch!\n> >\n> > I've tested the latest patch.\n>\n>\n> Thank you for testing and reviewing!\n>\n>\n> > On recovery conflict on lock and on\n> > bufferpin, if max_standby_streaming_delay is disabled (set to -1), the\n> > logs don't appear even if log_recovery_conflicts is true.\n>\n>\n> Nice catch! it is fixed in the new attached patch (the log reporting has\n> been moved out of StandbyTimeoutHandler()).\n>\n> >\n> > Here is random comments on the code:\n> >\n> > + recovery_conflict_main_message = psprintf(\"waiting for\n> > recovery conflict on %s\",\n> > +\n> > get_procsignal_reason_desc(reason));\n> > :\n> > + ereport(LOG,\n> > + (errmsg(\"%s\", recovery_conflict_main_message),\n> > + errdetail(\"%s\\n\" \"There is %d blocking\n> > connection(s).\", wal_record_detail_str, num_waitlist_entries)));\n> >\n> > It's not translation-support-friendly. I think the message \"waiting\n> > for recovery conflict on %s\" should be surrounded by _(). 
Or we can\n> > just put it to ereport as follows:\n> >\n> > ereport(LOG,\n> > (errmsg(\"waiting for recovery conflicts on %s\",\n> > get_procsignal_reason_desc(reason))\n> > ...\n>\n>\n> changed in the new attached patch.\n>\n>\n> >\n> > ---\n> > + oldcontext = MemoryContextSwitchTo(ErrorContext);\n> > + econtext = error_context_stack;\n> > +\n> > + if (XLogRecGetBlockTag(econtext->arg, 0, &rnode, &forknum, &blknum))\n> >\n> > I don't think it's a good idea to rely on error_context_stack because\n> > other codes might set another error context before reaching here in\n> > the future.\n>\n>\n> right, changed in the new attached patch: this is now done in\n> rm_redo_error_callback() and using the XLogReaderState passed as argument.\n>\n>\n> >\n> > ---\n> > + if (XLogRecGetBlockTag(econtext->arg, 0, &rnode, &forknum, &blknum))\n> > + wal_record_detail_str = psprintf(\"WAL record received\n> > at %s.\\nTablespace/database/relation are %u/%u/%u, fork is %s and\n> > block is %u.\",\n> > + receipt_time_str,\n> > rnode.spcNode, rnode.dbNode, rnode.relNode,\n> > + forkNames[forknum],\n> > + blknum);\n> >\n> > There might be a block tag in block ids other than 0.\n>\n>\n> right, fixed in the new attached patch.\n>\n>\n> > I'm not sure the\n> > errdetail is the best place where we display WAL information.\n>\n>\n> moved to context in the new attached patch.\n>\n>\n> > For\n> > instance, we display both the relation oid and block number depending\n> > on RM as follows:\n> >\n> > 2020-07-07 15:50:30.733 JST [13344] LOG: waiting for recovery conflict on lock\n> > 2020-07-07 15:50:30.733 JST [13344] DETAIL: WAL record received at\n> > 2020-07-07 15:50:27.73378+09.\n> > There is 1 blocking connection(s).\n> > 2020-07-07 15:50:30.733 JST [13344] CONTEXT: WAL redo at 0/3000028\n> > for Standby/LOCK: xid 506 db 13586 rel 16384\n> >\n> > I wonder if we can display the details of redo WAL information by\n> > improving xlog_outdesc() or rm_redo_error_callback() so that it\n> > displays 
relfilenode, forknum, and block number. What do you think?\n>\n>\n> I think that fully make sense to move this to rm_redo_error_callback().\n>\n> This is where the information is now been displayed in the new attached\n> patch.\n>\n>\n> >\n> > ---\n> > + /* display wal record information */\n> > + if (log_recovery_conflicts)\n> > + LogBlockedWalRecordInfo(num_waitlist_entries, reason);\n> >\n> > I'm concerned that we should log the recovery conflict information\n> > every time when a recovery conflict happens from the perspective of\n> > overheads and the amount of the logs. Can we logs that information\n> > after waiting for deadlock_timeouts secs like log_lock_waits or\n> > waiting for the fixed duration?\n>\n>\n> The new attached patch is now waiting for deadlock_timeout duration.\n>\n> >\n> > ---\n> > @@ -609,6 +682,10 @@ StandbyTimeoutHandler(void)\n> > /* forget any pending STANDBY_DEADLOCK_TIMEOUT request */\n> > disable_timeout(STANDBY_DEADLOCK_TIMEOUT, false);\n> >\n> > + /* display wal record information */\n> > + if (log_recovery_conflicts)\n> > + LogBlockedWalRecordInfo(-1,\n> > PROCSIG_RECOVERY_CONFLICT_BUFFERPIN);\n> > +\n> > SendRecoveryConflictWithBufferPin(PROCSIG_RECOVERY_CONFLICT_BUFFERPIN);\n> > }\n> >\n> > ResolveRecoveryConflictWithBufferPin() which sets a timer to call\n> > StandbyTimeoutHandler() can be called multiple times even if the\n> > recovery is waiting for one buffer pin. I think we should avoid\n> > logging the same contents multiple times.\n>\n>\n> I do agree, only the first pass is now been logged.\n>\n>\n> >\n> > ---\n> > -\n> > + {\n> > + {\"log_recovery_conflicts\", PGC_SUSET, LOGGING_WHAT,\n> > + gettext_noop(\"Logs standby recovery conflicts.\"),\n> > + NULL\n> > + },\n> > + &log_recovery_conflicts,\n> > + true,\n> > + NULL, NULL, NULL\n> > + },\n> >\n> > Other logging parameters such as log_lock_waits is false by default. 
I\n> > think this parameter can also be false by default but is there any\n> > reason to enable it by default?\n>\n>\n> now set to false by default.\n>\n>\n> >\n> > Since this parameter applies only to the startup process, I think it\n> > should be PGC_SIGHUP.\n>\n>\n> changed that way.\n>\n>\n> >\n> > ---\n> > + /* display wal record information */\n> > + if (log_recovery_conflicts)\n> > + LogBlockedWalRecordInfo(CountDBBackends(dbid),\n> > PROCSIG_RECOVERY_CONFLICT_DATABASE);\n> >\n> > We log the recovery conflict into the server log but we don't update\n> > the process title to append \"waiting\". While discussing the process\n> > title update on recovery conflict, I got the review comment[1] that we\n> > don't need to update the process title because no wait occurs when\n> > recovery conflict with database happens. As the comment says, recovery\n> > is canceling the existing processes on the database being removed, but\n> > not waiting for something.\n>\n>\n> Ok, I keep reporting the conflict but changed the wording for this\n> particular case.\n\nThank you for updating the patch! Here is my comments on the latest\nversion patch:\n\n+ /*\n+ * Ensure we are in the startup process\n+ * if we want to log standby recovery conflicts\n+ */\n+ if (AmStartupProcess() && log_recovery_conflicts)\n+ {\n\nThis function must be used by only the startup process. Not sure but\nif we need this check I think we should use an assertion rather than a\ncondition in if statement.\n\nI think the information about relfilenode and forknumber is useful\neven in a normal recovery case. I think we can always add this\ninformation to errcontext. What do you think?\n\n+ GetXLogReceiptTime(&rtime, &fromStream);\n+ if (fromStream)\n+ {\n+ receipt_time_str = pstrdup(timestamptz_to_str(rtime));\n+ appendStringInfo(&buf,\"\\nWAL record received at %s\",\nreceipt_time_str);\n\nNot sure showing the receipt time of WAL record is useful for users in\nthis case. 
IIUC that the receipt time doesn't correspond to individual\nWAL records whereas the errcontext information is about the particular\nWAL record.\n\n+ for (block_id = 0; block_id <= record->max_block_id; block_id++)\n+ {\n+ if (XLogRecGetBlockTag(record, block_id, &rnode,\n&forknum, &blknum))\n+ appendStringInfo(&buf,\"\\ntbs %u db %u rel %u, fork\n%s, blkno %u\",\n+ rnode.spcNode, rnode.dbNode, rnode.relNode,\n+ forkNames[forknum],\n+ blknum);\n\nHow about showing something like pg_waldump?\n\nCONTEXT: WAL redo at 0/3059100 for Heap2/CLEAN: remxid 506, blkref\n#0: rel 1000/20000/1234 fork main blk 10, blkref #1: rel\n1000/20000/1234 fork main blk 11\n\nor\n\nCONTEXT: WAL redo at 0/3059100 for Heap2/CLEAN: remxid 506\n blkref #0: rel 1000/20000/1234 fork main blk 10\n blkref #1: rel 1000/20000/1234 fork main blk 11\n\nBut the latter format makes grepping difficult.\n\nAlso, I guess the changes in rm_redo_error_callback can also be a\nseparate patch.\n\n+/*\n+ * Display information about the wal record\n+ * apply being blocked\n+ */\n+static void\n+LogBlockedWalRecordInfo(int num_waitlist_entries, ProcSignalReason reason)\n\nI think the function name needs to be updated. The function no longer\nlogs WAL record information.\n\n+{\n+ if (num_waitlist_entries > 0)\n+ if (reason == PROCSIG_RECOVERY_CONFLICT_DATABASE)\n+ ereport(LOG,\n+ (errmsg(\"recovery is experiencing recovery conflict on\n%s\", get_procsignal_reason_desc(reason)),\n+ errdetail(\"There is %d conflicting connection(s).\",\nnum_waitlist_entries)));\n+ else\n+ ereport(LOG,\n+ (errmsg(\"recovery is waiting for recovery conflict on\n%s\", get_procsignal_reason_desc(reason)),\n+ errdetail(\"There is %d blocking connection(s).\",\nnum_waitlist_entries)));\n+ else\n+ ereport(LOG,\n+ (errmsg(\"recovery is waiting for recovery conflict on %s\",\nget_procsignal_reason_desc(reason))));\n+}\n\nHow about displaying actual virtual transaction ids or process ids the\nstartup process is waiting for? 
For example, the log is going to be:\n\nLOG: recovery is waiting for recovery conflict on snapshot\nDETAIL: Conflicting virtual transaction ids: 100/101, 200/202, 300/303\n\nor\n\nLOG: recovery is waiting for recovery conflict on snapshot\nDETAIL: Conflicting processes: 123, 456, 789\n\nFYI, errdetail_plural() or errdetail_log_plural() can be used for pluralization.\n\n+ tmpWaitlist = waitlist;\n+ while (VirtualTransactionIdIsValid(*tmpWaitlist))\n+ {\n+ tmpWaitlist++;\n+ }\n+\n+ num_waitlist_entries = (tmpWaitlist - waitlist);\n+\n+ /* display wal record information */\n+ if (log_recovery_conflicts &&\n(TimestampDifferenceExceeds(recovery_conflicts_log_time,\nGetCurrentTimestamp(),\n+ DeadlockTimeout))) {\n+ LogBlockedWalRecordInfo(num_waitlist_entries, reason);\n+ recovery_conflicts_log_time = GetCurrentTimestamp();\n+ }\n\nrecovery_conflicts_log_time is not initialized. And shouldn't we\ncompare the current timestamp to the timestamp when the startup\nprocess started waiting?\n\nI think we should call LogBlockedWalRecordInfo() inside of the inner\nwhile loop rather than at the beginning of\nResolveRecoveryConflictWithVirtualXIDs(). In lock conflict cases, the\nstartup process waits until 'ltime', then enters\nResolveRecoveryConflictWithVirtualXIDs() after reaching 'ltime'.\nTherefore, it makes sense to call LogBlockedWalRecordInfo() at the\nbeginning of ResolveRecoveryConflictWithVirtualXIDs(). However, in\nsnapshot and tablespace conflict cases (i.g.\nResolveRecoveryConflictWithSnapshot() and\nResolveRecoveryConflictWithTablespace()), it enters\nResolveRecoveryConflictWithVirtualXIDs() without waits and waits for\nreaching ‘ltime’ inside of the inner while look. 
So the above\ncondition could always be false.\n\nI wonder if we can have something like the following function:\n\nbool\nLogRecoveryConflict(VirtualTransactionId *waitlist, ProcSignalReason reason,\n TimestampTz wait_start)\n{\n if (!TimestampDifferenceExceeds(wait_start, GetCurrentTimestamp(),\n\nmax_standby_streaming_delay))\n return false;\n\n if (waitlist)\n {\n char *buf;\n\n buf = construct a string containing all process ids (or\nvirtual transaction ids) to resolve from waitlist;\n\n ereport(LOG,\n (errmsg(\"recovery is waiting recovery conflict on %s\",\nget_procsignal_reason_desc(reason)),\n (errdetail_log_plural(\"Conflicting process : %s.\",\n \"Conflicting processes : %s.\",\n the number of processes in *waitlist, buf))));\n }\n else\n ereport(LOG,\n (errmsg(\"recovery is resolving recovery conflict on %s\",\nget_procsignal_reason_desc(reason))));\n\n return true;\n}\n\n'wait_start' is the timestamp when the caller started waiting for the\nrecovery conflict. This function logs resolving recovery conflict with\ndetailed information if needed. The caller call this function when\nlog_recovery_conflicts is enabled as follows:\n\nbool logged = false;\n :\nif (log_recovery_conflicts && !logged)\n logged = LogRecoveryConflict(waitlist, reason, waitStart);\n\n+ <varlistentry id=\"guc-log-recovery-conflicts\"\nxreflabel=\"log_recovery_conflicts\">\n+ <term><varname>log_recovery_conflicts</varname> (<type>boolean</type>)\n+ <indexterm>\n+ <primary><varname>log_recovery_conflicts</varname>\nconfiguration parameter</primary>\n+ </indexterm>\n+ </term>\n+ <listitem>\n+ <para>\n+ Controls whether detailed information is produced when a conflict\n+ occurs during standby recovery. 
The default is <literal>on</literal>.\n+ Only superusers can change this setting.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n\nI think the documentation needs to be updated.\n\nIIUC this feature is to logging the startup process is waiting due to\nrecovery conflict or waiting for recovery conflict resolution. But it\nlogs only when the startup process waits for longer than a certain\ntime. I think we can improve the documentation in terms of that point.\nAlso, the last sentence is no longer true; log_recovery_conflicts is\nnow PGC_SIGHUP.\n\nBTW 'log_recovery_conflict_waits' might be better name for consistency\nwith log_lock_waits?\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 31 Jul 2020 14:12:19 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
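The review above proposes walking the sentinel-terminated `waitlist` array and rendering the conflicting virtual transaction ids into a single errdetail string ("Conflicting virtual transaction ids: 100/101, 200/202, ..."). The following is a minimal standalone sketch of that string construction, not PostgreSQL's actual code: the `VirtualTransactionId` struct and the `-1` validity sentinel here are simplified stand-ins for the server's `backendId`/`localTransactionId` pair and `VirtualTransactionIdIsValid()`.

```c
#include <stdio.h>
#include <string.h>

/* Simplified stand-in for PostgreSQL's VirtualTransactionId. */
typedef struct
{
    int         backendId;              /* -1 terminates the list */
    unsigned    localTransactionId;
} VirtualTransactionId;

#define VXID_IS_VALID(v) ((v).backendId != -1)

/*
 * Count the sentinel-terminated waitlist and render it as "b/l, b/l, ...",
 * the format proposed for the DETAIL line.  Returns the entry count, which
 * the caller would feed to errdetail_log_plural() for pluralization.
 */
static int
format_conflicting_vxids(const VirtualTransactionId *waitlist,
                         char *buf, size_t buflen)
{
    int         n = 0;
    size_t      used = 0;

    buf[0] = '\0';
    for (; VXID_IS_VALID(waitlist[n]); n++)
    {
        used += (size_t) snprintf(buf + used, buflen - used, "%s%d/%u",
                                  n > 0 ? ", " : "",
                                  waitlist[n].backendId,
                                  waitlist[n].localTransactionId);
    }
    return n;
}
```

In the patch itself this would live next to the `while (VirtualTransactionIdIsValid(*tmpWaitlist))` counting loop quoted above, replacing it with a single pass that both counts and formats.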
{
"msg_contents": "Hi,\n\nOn 7/31/20 7:12 AM, Masahiko Sawada wrote:\n> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n>\n>\n>\n> On Sat, 11 Jul 2020 at 21:56, Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>>\n>> On 7/7/20 9:43 AM, Masahiko Sawada wrote:\n>>> Fully makes sense, the new patch version attached is now producing:\n>>>> 2020-07-06 06:10:36.022 UTC [14035] LOG: waiting for recovery conflict\n>>>> on snapshot\n>>> How about adding the subject? that is, \"recovery is waiting for\n>>> recovery conflict on %s\" or \"recovery process <pid> is waiting for\n>>> conflict on %s\".\n>>\n>> The subject is now added in the new attached patch (I did not include\n>> the pid as it is part of the log prefix).\n>>\n>> It now looks like:\n>>\n>> 2020-07-11 12:00:41.092 UTC [23217] LOG: recovery is waiting for\n>> recovery conflict on snapshot\n>> 2020-07-11 12:00:41.092 UTC [23217] DETAIL: There is 1 blocking\n>> connection(s).\n>> 2020-07-11 12:00:41.092 UTC [23217] CONTEXT: WAL redo at 0/4A0A6BF0 for\n>> Heap2/CLEAN: remxid 1128\n>> WAL record received at 2020-07-11 12:00:41.092231+00\n>> tbs 1663 db 13586 rel 16805, fork main, blkno 0\n>>>> 2020-07-06 06:10:36.022 UTC [14035] DETAIL: WAL record received at\n>>>> 2020-07-06 06:10:36.021963+00.\n>>>> Tablespace/database/relation are 1663/13586/16672, fork is main\n>>>> and block is 0.\n>>>> There is 1 blocking connection(s).\n>>> To follow the existing log message, perhaps this can be something like\n>>> \"WAL record received at %s, rel %u/%u/%u, fork %s, blkno %u. %d\n>>> processes\". But I'm not sure the errdetail is the best place to\n>>> display the WAL information as I mentioned in the latter part of this\n>>> email.\n>> moved to the context and formatted the same way as the current\n>> Standby/LOCK context.\n>>\n>>\n>>> Ok. 
Let's keep this thread for the new attached patch that focus on the\n>>>> recovery process waiting.\n>>> Thank you for updating the patch!\n>>>\n>>> I've tested the latest patch.\n>>\n>> Thank you for testing and reviewing!\n>>\n>>\n>>> On recovery conflict on lock and on\n>>> bufferpin, if max_standby_streaming_delay is disabled (set to -1), the\n>>> logs don't appear even if log_recovery_conflicts is true.\n>>\n>> Nice catch! it is fixed in the new attached patch (the log reporting has\n>> been moved out of StandbyTimeoutHandler()).\n>>\n>>> Here is random comments on the code:\n>>>\n>>> + recovery_conflict_main_message = psprintf(\"waiting for\n>>> recovery conflict on %s\",\n>>> +\n>>> get_procsignal_reason_desc(reason));\n>>> :\n>>> + ereport(LOG,\n>>> + (errmsg(\"%s\", recovery_conflict_main_message),\n>>> + errdetail(\"%s\\n\" \"There is %d blocking\n>>> connection(s).\", wal_record_detail_str, num_waitlist_entries)));\n>>>\n>>> It's not translation-support-friendly. I think the message \"waiting\n>>> for recovery conflict on %s\" should be surrounded by _(). 
Or we can\n>>> just put it to ereport as follows:\n>>>\n>>> ereport(LOG,\n>>> (errmsg(\"waiting for recovery conflicts on %s\",\n>>> get_procsignal_reason_desc(reason))\n>>> ...\n>>\n>> changed in the new attached patch.\n>>\n>>\n>>> ---\n>>> + oldcontext = MemoryContextSwitchTo(ErrorContext);\n>>> + econtext = error_context_stack;\n>>> +\n>>> + if (XLogRecGetBlockTag(econtext->arg, 0, &rnode, &forknum, &blknum))\n>>>\n>>> I don't think it's a good idea to rely on error_context_stack because\n>>> other codes might set another error context before reaching here in\n>>> the future.\n>>\n>> right, changed in the new attached patch: this is now done in\n>> rm_redo_error_callback() and using the XLogReaderState passed as argument.\n>>\n>>\n>>> ---\n>>> + if (XLogRecGetBlockTag(econtext->arg, 0, &rnode, &forknum, &blknum))\n>>> + wal_record_detail_str = psprintf(\"WAL record received\n>>> at %s.\\nTablespace/database/relation are %u/%u/%u, fork is %s and\n>>> block is %u.\",\n>>> + receipt_time_str,\n>>> rnode.spcNode, rnode.dbNode, rnode.relNode,\n>>> + forkNames[forknum],\n>>> + blknum);\n>>>\n>>> There might be a block tag in block ids other than 0.\n>>\n>> right, fixed in the new attached patch.\n>>\n>>\n>>> I'm not sure the\n>>> errdetail is the best place where we display WAL information.\n>>\n>> moved to context in the new attached patch.\n>>\n>>\n>>> For\n>>> instance, we display both the relation oid and block number depending\n>>> on RM as follows:\n>>>\n>>> 2020-07-07 15:50:30.733 JST [13344] LOG: waiting for recovery conflict on lock\n>>> 2020-07-07 15:50:30.733 JST [13344] DETAIL: WAL record received at\n>>> 2020-07-07 15:50:27.73378+09.\n>>> There is 1 blocking connection(s).\n>>> 2020-07-07 15:50:30.733 JST [13344] CONTEXT: WAL redo at 0/3000028\n>>> for Standby/LOCK: xid 506 db 13586 rel 16384\n>>>\n>>> I wonder if we can display the details of redo WAL information by\n>>> improving xlog_outdesc() or rm_redo_error_callback() so that it\n>>> displays 
relfilenode, forknum, and block number. What do you think?\n>>\n>> I think that fully make sense to move this to rm_redo_error_callback().\n>>\n>> This is where the information is now been displayed in the new attached\n>> patch.\n>>\n>>\n>>> ---\n>>> + /* display wal record information */\n>>> + if (log_recovery_conflicts)\n>>> + LogBlockedWalRecordInfo(num_waitlist_entries, reason);\n>>>\n>>> I'm concerned that we should log the recovery conflict information\n>>> every time when a recovery conflict happens from the perspective of\n>>> overheads and the amount of the logs. Can we logs that information\n>>> after waiting for deadlock_timeouts secs like log_lock_waits or\n>>> waiting for the fixed duration?\n>>\n>> The new attached patch is now waiting for deadlock_timeout duration.\n>>\n>>> ---\n>>> @@ -609,6 +682,10 @@ StandbyTimeoutHandler(void)\n>>> /* forget any pending STANDBY_DEADLOCK_TIMEOUT request */\n>>> disable_timeout(STANDBY_DEADLOCK_TIMEOUT, false);\n>>>\n>>> + /* display wal record information */\n>>> + if (log_recovery_conflicts)\n>>> + LogBlockedWalRecordInfo(-1,\n>>> PROCSIG_RECOVERY_CONFLICT_BUFFERPIN);\n>>> +\n>>> SendRecoveryConflictWithBufferPin(PROCSIG_RECOVERY_CONFLICT_BUFFERPIN);\n>>> }\n>>>\n>>> ResolveRecoveryConflictWithBufferPin() which sets a timer to call\n>>> StandbyTimeoutHandler() can be called multiple times even if the\n>>> recovery is waiting for one buffer pin. I think we should avoid\n>>> logging the same contents multiple times.\n>>\n>> I do agree, only the first pass is now been logged.\n>>\n>>\n>>> ---\n>>> -\n>>> + {\n>>> + {\"log_recovery_conflicts\", PGC_SUSET, LOGGING_WHAT,\n>>> + gettext_noop(\"Logs standby recovery conflicts.\"),\n>>> + NULL\n>>> + },\n>>> + &log_recovery_conflicts,\n>>> + true,\n>>> + NULL, NULL, NULL\n>>> + },\n>>>\n>>> Other logging parameters such as log_lock_waits is false by default. 
I\n>>> think this parameter can also be false by default but is there any\n>>> reason to enable it by default?\n>>\n>> now set to false by default.\n>>\n>>\n>>> Since this parameter applies only to the startup process, I think it\n>>> should be PGC_SIGHUP.\n>>\n>> changed that way.\n>>\n>>\n>>> ---\n>>> + /* display wal record information */\n>>> + if (log_recovery_conflicts)\n>>> + LogBlockedWalRecordInfo(CountDBBackends(dbid),\n>>> PROCSIG_RECOVERY_CONFLICT_DATABASE);\n>>>\n>>> We log the recovery conflict into the server log but we don't update\n>>> the process title to append \"waiting\". While discussing the process\n>>> title update on recovery conflict, I got the review comment[1] that we\n>>> don't need to update the process title because no wait occurs when\n>>> recovery conflict with database happens. As the comment says, recovery\n>>> is canceling the existing processes on the database being removed, but\n>>> not waiting for something.\n>>\n>> Ok, I keep reporting the conflict but changed the wording for this\n>> particular case.\n> Thank you for updating the patch! Here is my comments on the latest\n> version patch:\n>\n> + /*\n> + * Ensure we are in the startup process\n> + * if we want to log standby recovery conflicts\n> + */\n> + if (AmStartupProcess() && log_recovery_conflicts)\n> + {\n>\n> This function must be used by only the startup process. Not sure but\n> if we need this check I think we should use an assertion rather than a\n> condition in if statement.\n>\n> I think the information about relfilenode and forknumber is useful\n> even in a normal recovery case. I think we can always add this\n> information to errcontext. 
What do you think?\nFully agree, that it could be useful outside the context of this patch.\n>\n> + GetXLogReceiptTime(&rtime, &fromStream);\n> + if (fromStream)\n> + {\n> + receipt_time_str = pstrdup(timestamptz_to_str(rtime));\n> + appendStringInfo(&buf,\"\\nWAL record received at %s\",\n> receipt_time_str);\n>\n> Not sure showing the receipt time of WAL record is useful for users in\n> this case. IIUC that the receipt time doesn't correspond to individual\n> WAL records whereas the errcontext information is about the particular\n> WAL record.\n>\n> + for (block_id = 0; block_id <= record->max_block_id; block_id++)\n> + {\n> + if (XLogRecGetBlockTag(record, block_id, &rnode,\n> &forknum, &blknum))\n> + appendStringInfo(&buf,\"\\ntbs %u db %u rel %u, fork\n> %s, blkno %u\",\n> + rnode.spcNode, rnode.dbNode, rnode.relNode,\n> + forkNames[forknum],\n> + blknum);\n>\n> How about showing something like pg_waldump?\n>\n> CONTEXT: WAL redo at 0/3059100 for Heap2/CLEAN: remxid 506, blkref\n> #0: rel 1000/20000/1234 fork main blk 10, blkref #1: rel\n> 1000/20000/1234 fork main blk 11\n>\n> or\n>\n> CONTEXT: WAL redo at 0/3059100 for Heap2/CLEAN: remxid 506\n> blkref #0: rel 1000/20000/1234 fork main blk 10\n> blkref #1: rel 1000/20000/1234 fork main blk 11\n>\n> But the latter format makes grepping difficult.\n>\n> Also, I guess the changes in rm_redo_error_callback can also be a\n> separate patch.\n\nI fully agree with your comments.\n\nAs you have already seen I created a new patch dedicated to the \nrm_redo_error_callback() changes: \nhttps://commitfest.postgresql.org/29/2668/.\n\nThen those changes are not part of the new attached patch related to \nthis thread anymore.\n\n>\n> +/*\n> + * Display information about the wal record\n> + * apply being blocked\n> + */\n> +static void\n> +LogBlockedWalRecordInfo(int num_waitlist_entries, ProcSignalReason reason)\n>\n> I think the function name needs to be updated. 
The function no longer\n> logs WAL record information.\nright.\n>\n> +{\n> + if (num_waitlist_entries > 0)\n> + if (reason == PROCSIG_RECOVERY_CONFLICT_DATABASE)\n> + ereport(LOG,\n> + (errmsg(\"recovery is experiencing recovery conflict on\n> %s\", get_procsignal_reason_desc(reason)),\n> + errdetail(\"There is %d conflicting connection(s).\",\n> num_waitlist_entries)));\n> + else\n> + ereport(LOG,\n> + (errmsg(\"recovery is waiting for recovery conflict on\n> %s\", get_procsignal_reason_desc(reason)),\n> + errdetail(\"There is %d blocking connection(s).\",\n> num_waitlist_entries)));\n> + else\n> + ereport(LOG,\n> + (errmsg(\"recovery is waiting for recovery conflict on %s\",\n> get_procsignal_reason_desc(reason))));\n> +}\n>\n> How about displaying actual virtual transaction ids or process ids the\n> startup process is waiting for? For example, the log is going to be:\n>\n> LOG: recovery is waiting for recovery conflict on snapshot\n> DETAIL: Conflicting virtual transaction ids: 100/101, 200/202, 300/303\n\nThe new attached patch adds the information related to the virtual \ntransaction ids, so that the outcome is now:\n\n2020-08-10 13:31:00.485 UTC [1760] LOG: recovery is resolving recovery \nconflict on snapshot\n2020-08-10 13:31:00.485 UTC [1760] DETAIL: Conflicting virtual \ntransaction ids: 4/2, 3/2.\n\n>\n> or\n>\n> LOG: recovery is waiting for recovery conflict on snapshot\n> DETAIL: Conflicting processes: 123, 456, 789\nI think it makes more sense to display the conflicting virtual \ntransaction id(s), as the blocking process(es) will be known during the \nstatement cancellation.\n>\n> FYI, errdetail_plural() or errdetail_log_plural() can be used for pluralization.\n>\n> + tmpWaitlist = waitlist;\n> + while (VirtualTransactionIdIsValid(*tmpWaitlist))\n> + {\n> + tmpWaitlist++;\n> + }\n> +\n> + num_waitlist_entries = (tmpWaitlist - waitlist);\n> +\n> + /* display wal record information */\n> + if (log_recovery_conflicts &&\n> 
(TimestampDifferenceExceeds(recovery_conflicts_log_time,\n> GetCurrentTimestamp(),\n> + DeadlockTimeout))) {\n> + LogBlockedWalRecordInfo(num_waitlist_entries, reason);\n> + recovery_conflicts_log_time = GetCurrentTimestamp();\n> + }\n>\n> recovery_conflicts_log_time is not initialized. And shouldn't we\n> compare the current timestamp to the timestamp when the startup\n> process started waiting?\n>\n> I think we should call LogBlockedWalRecordInfo() inside of the inner\n> while loop rather than at the beginning of\n> ResolveRecoveryConflictWithVirtualXIDs(). In lock conflict cases, the\n> startup process waits until 'ltime', then enters\n> ResolveRecoveryConflictWithVirtualXIDs() after reaching 'ltime'.\n> Therefore, it makes sense to call LogBlockedWalRecordInfo() at the\n> beginning of ResolveRecoveryConflictWithVirtualXIDs(). However, in\n> snapshot and tablespace conflict cases (i.g.\n> ResolveRecoveryConflictWithSnapshot() and\n> ResolveRecoveryConflictWithTablespace()), it enters\n> ResolveRecoveryConflictWithVirtualXIDs() without waits and waits for\n> reaching ‘ltime’ inside of the inner while look. 
So the above\n> condition could always be false.\n\nThat would make the information being displayed after \nmax_standby_streaming_delay is reached for the multiple cases you just \ndescribed.\n\nIt indeed makes more sense to call the function in the inside of the \ninner loop: this is done in the new attached patch.\n\n>\n> I wonder if we can have something like the following function:\n>\n> bool\n> LogRecoveryConflict(VirtualTransactionId *waitlist, ProcSignalReason reason,\n> TimestampTz wait_start)\n> {\n> if (!TimestampDifferenceExceeds(wait_start, GetCurrentTimestamp(),\n>\n> max_standby_streaming_delay))\n> return false;\n>\n> if (waitlist)\n> {\n> char *buf;\n>\n> buf = construct a string containing all process ids (or\n> virtual transaction ids) to resolve from waitlist;\n>\n> ereport(LOG,\n> (errmsg(\"recovery is waiting recovery conflict on %s\",\n> get_procsignal_reason_desc(reason)),\n> (errdetail_log_plural(\"Conflicting process : %s.\",\n> \"Conflicting processes : %s.\",\n> the number of processes in *waitlist, buf))));\n> }\n> else\n> ereport(LOG,\n> (errmsg(\"recovery is resolving recovery conflict on %s\",\n> get_procsignal_reason_desc(reason))));\n>\n> return true;\n> }\n>\n> 'wait_start' is the timestamp when the caller started waiting for the\n> recovery conflict. This function logs resolving recovery conflict with\n> detailed information if needed. 
The caller call this function when\n> log_recovery_conflicts is enabled as follows:\n>\n> bool logged = false;\n> :\n> if (log_recovery_conflicts && !logged)\n> logged = LogRecoveryConflict(waitlist, reason, waitStart);\n\nIt's more or less what the new attached patch is doing, except that it \ndeals with the waitStart comparison in the while loop (to make the \nchanges to the LogRecoveryConflict() calls outside \nResolveRecoveryConflictWithVirtualXIDs() simpler).\n\nFor the buffer pin case, I moved the call to LogRecoveryConflict to \nStandbyTimeoutHandler (instead of the beginning of \nResolveRecoveryConflictWithBufferPin).\n\nThat way we are consistent across all the cases: we are logging once the \nconflict resolution is actually happening (means we are ready/about to \ncancel what is needed).\n\nThen, the LogRecoveryConflict() is now not reporting that \"recovery is \nwaiting recovery conflict\" anymore but that \"recovery is resolving \nrecovery conflict\" in all the cases.\n\n>\n> + <varlistentry id=\"guc-log-recovery-conflicts\"\n> xreflabel=\"log_recovery_conflicts\">\n> + <term><varname>log_recovery_conflicts</varname> (<type>boolean</type>)\n> + <indexterm>\n> + <primary><varname>log_recovery_conflicts</varname>\n> configuration parameter</primary>\n> + </indexterm>\n> + </term>\n> + <listitem>\n> + <para>\n> + Controls whether detailed information is produced when a conflict\n> + occurs during standby recovery. The default is <literal>on</literal>.\n> + Only superusers can change this setting.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n>\n> I think the documentation needs to be updated.\nRight, updated that way in the new attached patch.\n>\n> IIUC this feature is to logging the startup process is waiting due to\n> recovery conflict or waiting for recovery conflict resolution. But it\n> logs only when the startup process waits for longer than a certain\n> time. 
I think we can improve the documentation in terms of that point.\n> Also, the last sentence is no longer true; log_recovery_conflicts is\n> now PGC_SIGHUP.\n>\n> BTW 'log_recovery_conflict_waits' might be better name for consistency\n> with log_lock_waits?\n\nAs per the changes in this new patch, we are not logging when the \nstartup process starts waiting anymore, but only during the conflict \nresolution.\n\nI changed the GUC to \"log_recovery_conflicts_resolution\" instead (based \non my comment above).\n\nThanks,\n\nBertrand",
"msg_date": "Mon, 10 Aug 2020 16:43:36 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
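The block-tag formatting that was split off into the separate rm_redo_error_callback patch aims at the pg_waldump-like single-line shape suggested in the review ("blkref #0: rel 1000/20000/1234 fork main blk 10"). Here is a hedged standalone sketch of just the formatting step; `RelFileNode` and `fork_names` are simplified stand-ins for the server's struct and `forkNames[]` table, and the real callback would loop `block_id` up to `record->max_block_id` via `XLogRecGetBlockTag()`.

```c
#include <stdio.h>

/* Simplified stand-in for PostgreSQL's RelFileNode. */
typedef struct
{
    unsigned    spcNode;
    unsigned    dbNode;
    unsigned    relNode;
} RelFileNode;

static const char *const fork_names[] = {"main", "fsm", "vm", "init"};

/*
 * Append one block reference in the pg_waldump-like single-line format
 * discussed above.  A comma-separated layout keeps the CONTEXT line on one
 * line, which the review preferred because it stays greppable.
 */
static int
append_blkref(char *buf, size_t buflen, int block_id,
              RelFileNode rnode, int forknum, unsigned blkno)
{
    return snprintf(buf, buflen,
                    ", blkref #%d: rel %u/%u/%u fork %s blk %u",
                    block_id,
                    rnode.spcNode, rnode.dbNode, rnode.relNode,
                    fork_names[forknum], blkno);
}
```
<imports>
#include <assert.h>
#include <string.h>
</imports>
<test>
int main(void)
{
    RelFileNode rnode = {1000, 20000, 1234};
    char buf[128];

    append_blkref(buf, sizeof(buf), 0, rnode, 0, 10);
    assert(strcmp(buf, ", blkref #0: rel 1000/20000/1234 fork main blk 10") == 0);

    append_blkref(buf, sizeof(buf), 1, rnode, 2, 11);
    assert(strcmp(buf, ", blkref #1: rel 1000/20000/1234 fork vm blk 11") == 0);
    return 0;
}
```
</test>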
{
"msg_contents": "On Mon, 10 Aug 2020 at 23:43, Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> Hi,\n>\n> On 7/31/20 7:12 AM, Masahiko Sawada wrote:\n> > + tmpWaitlist = waitlist;\n> > + while (VirtualTransactionIdIsValid(*tmpWaitlist))\n> > + {\n> > + tmpWaitlist++;\n> > + }\n> > +\n> > + num_waitlist_entries = (tmpWaitlist - waitlist);\n> > +\n> > + /* display wal record information */\n> > + if (log_recovery_conflicts &&\n> > (TimestampDifferenceExceeds(recovery_conflicts_log_time,\n> > GetCurrentTimestamp(),\n> > + DeadlockTimeout))) {\n> > + LogBlockedWalRecordInfo(num_waitlist_entries, reason);\n> > + recovery_conflicts_log_time = GetCurrentTimestamp();\n> > + }\n> >\n> > recovery_conflicts_log_time is not initialized. And shouldn't we\n> > compare the current timestamp to the timestamp when the startup\n> > process started waiting?\n> >\n> > I think we should call LogBlockedWalRecordInfo() inside of the inner\n> > while loop rather than at the beginning of\n> > ResolveRecoveryConflictWithVirtualXIDs(). In lock conflict cases, the\n> > startup process waits until 'ltime', then enters\n> > ResolveRecoveryConflictWithVirtualXIDs() after reaching 'ltime'.\n> > Therefore, it makes sense to call LogBlockedWalRecordInfo() at the\n> > beginning of ResolveRecoveryConflictWithVirtualXIDs(). However, in\n> > snapshot and tablespace conflict cases (i.g.\n> > ResolveRecoveryConflictWithSnapshot() and\n> > ResolveRecoveryConflictWithTablespace()), it enters\n> > ResolveRecoveryConflictWithVirtualXIDs() without waits and waits for\n> > reaching ‘ltime’ inside of the inner while look. 
So the above\n> > condition could always be false.\n>\n> That would make the information being displayed after\n> max_standby_streaming_delay is reached for the multiple cases you just\n> described.\n\nSorry, it should be deadlock_timeout, not max_standby_streaming_delay.\nOtherwise, the recovery conflict log message is printed when\nresolution, which seems not to achieve the original purpose. Am I\nmissing something?\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 27 Aug 2020 17:16:35 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
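The exchange above turns on when the `TimestampDifferenceExceeds(...)` guard fires: compared against `deadlock_timeout` from the moment the startup process started waiting, the message appears shortly after the wait begins (the log_lock_waits-style behavior); compared from an uninitialized or per-message timestamp, it may only fire at resolution time. As a minimal sketch of the comparison itself (PostgreSQL timestamps are int64 microsecond counts and the threshold is in milliseconds; this mirrors, but is not, the server's implementation in timestamp.c):

```c
#include <stdint.h>
#include <stdbool.h>

/*
 * Return true once (stop_time - start_time), in microseconds, has reached
 * a millisecond threshold such as deadlock_timeout.  The review's point is
 * that start_time must be the instant the startup process began waiting,
 * not the last time a conflict message was emitted.
 */
static bool
timestamp_difference_exceeds(int64_t start_time, int64_t stop_time, int msec)
{
    int64_t     diff = stop_time - start_time;

    return diff >= msec * (int64_t) 1000;
}
```

With `deadlock_timeout = 1000` (ms), a wait that began at `start_time` triggers logging exactly once one second of waiting has elapsed, independent of how many times the resolution loop polls.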
{
"msg_contents": "\nOn 8/27/20 10:16 AM, Masahiko Sawada wrote\n>\n> On Mon, 10 Aug 2020 at 23:43, Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>> Hi,\n>>\n>> On 7/31/20 7:12 AM, Masahiko Sawada wrote:\n>>> + tmpWaitlist = waitlist;\n>>> + while (VirtualTransactionIdIsValid(*tmpWaitlist))\n>>> + {\n>>> + tmpWaitlist++;\n>>> + }\n>>> +\n>>> + num_waitlist_entries = (tmpWaitlist - waitlist);\n>>> +\n>>> + /* display wal record information */\n>>> + if (log_recovery_conflicts &&\n>>> (TimestampDifferenceExceeds(recovery_conflicts_log_time,\n>>> GetCurrentTimestamp(),\n>>> + DeadlockTimeout))) {\n>>> + LogBlockedWalRecordInfo(num_waitlist_entries, reason);\n>>> + recovery_conflicts_log_time = GetCurrentTimestamp();\n>>> + }\n>>>\n>>> recovery_conflicts_log_time is not initialized. And shouldn't we\n>>> compare the current timestamp to the timestamp when the startup\n>>> process started waiting?\n>>>\n>>> I think we should call LogBlockedWalRecordInfo() inside of the inner\n>>> while loop rather than at the beginning of\n>>> ResolveRecoveryConflictWithVirtualXIDs(). In lock conflict cases, the\n>>> startup process waits until 'ltime', then enters\n>>> ResolveRecoveryConflictWithVirtualXIDs() after reaching 'ltime'.\n>>> Therefore, it makes sense to call LogBlockedWalRecordInfo() at the\n>>> beginning of ResolveRecoveryConflictWithVirtualXIDs(). However, in\n>>> snapshot and tablespace conflict cases (i.g.\n>>> ResolveRecoveryConflictWithSnapshot() and\n>>> ResolveRecoveryConflictWithTablespace()), it enters\n>>> ResolveRecoveryConflictWithVirtualXIDs() without waits and waits for\n>>> reaching ‘ltime’ inside of the inner while look. 
So the above\n>>> condition could always be false.\n>> That would make the information being displayed after\n>> max_standby_streaming_delay is reached for the multiple cases you just\n>> described.\n> Sorry, it should be deadlock_timeout, not max_standby_streaming_delay.\n> Otherwise, the recovery conflict log message is printed when\n> resolution, which seems not to achieve the original purpose. Am I\n> missing something?\n\nOk, I understand where the confusion is coming from.\n\nIndeed the new version is now printing the recovery conflict log message \nduring the conflict resolution (while the initial intention was to be \nwarned as soon as the replay had to wait).\n\nThe advantage of the new version is that it would be consistent across \nall the conflicts scenarios (if not, we would get messages during the \nresolution or when the replay started waiting, depending of the conflict \nscenario).\n\nOn the other hand, the cons of the new version is that we would miss \nmessages when no resolution is needed (replay wait duration < \nmax_standby_streaming_delay), but is that really annoying? (As no \ncancellation would occur)\n\nThinking about it, i like the new version (being warned during the \nresolution) as we would get messages only when cancelation will occur \n(which is what the user might want to avoid, so the extra info would be \nuseful).\n\nWhat do you think?\n\nBertrand\n\n\n\n",
"msg_date": "Thu, 27 Aug 2020 13:58:38 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On Thu, 27 Aug 2020 at 20:58, Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n>\n> On 8/27/20 10:16 AM, Masahiko Sawada wrote\n> >\n> > On Mon, 10 Aug 2020 at 23:43, Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> >> Hi,\n> >>\n> >> On 7/31/20 7:12 AM, Masahiko Sawada wrote:\n> >>> + tmpWaitlist = waitlist;\n> >>> + while (VirtualTransactionIdIsValid(*tmpWaitlist))\n> >>> + {\n> >>> + tmpWaitlist++;\n> >>> + }\n> >>> +\n> >>> + num_waitlist_entries = (tmpWaitlist - waitlist);\n> >>> +\n> >>> + /* display wal record information */\n> >>> + if (log_recovery_conflicts &&\n> >>> (TimestampDifferenceExceeds(recovery_conflicts_log_time,\n> >>> GetCurrentTimestamp(),\n> >>> + DeadlockTimeout))) {\n> >>> + LogBlockedWalRecordInfo(num_waitlist_entries, reason);\n> >>> + recovery_conflicts_log_time = GetCurrentTimestamp();\n> >>> + }\n> >>>\n> >>> recovery_conflicts_log_time is not initialized. And shouldn't we\n> >>> compare the current timestamp to the timestamp when the startup\n> >>> process started waiting?\n> >>>\n> >>> I think we should call LogBlockedWalRecordInfo() inside of the inner\n> >>> while loop rather than at the beginning of\n> >>> ResolveRecoveryConflictWithVirtualXIDs(). In lock conflict cases, the\n> >>> startup process waits until 'ltime', then enters\n> >>> ResolveRecoveryConflictWithVirtualXIDs() after reaching 'ltime'.\n> >>> Therefore, it makes sense to call LogBlockedWalRecordInfo() at the\n> >>> beginning of ResolveRecoveryConflictWithVirtualXIDs(). However, in\n> >>> snapshot and tablespace conflict cases (i.g.\n> >>> ResolveRecoveryConflictWithSnapshot() and\n> >>> ResolveRecoveryConflictWithTablespace()), it enters\n> >>> ResolveRecoveryConflictWithVirtualXIDs() without waits and waits for\n> >>> reaching ‘ltime’ inside of the inner while look. 
So the above\n> >>> condition could always be false.\n> >> That would make the information being displayed after\n> >> max_standby_streaming_delay is reached for the multiple cases you just\n> >> described.\n> > Sorry, it should be deadlock_timeout, not max_standby_streaming_delay.\n> > Otherwise, the recovery conflict log message is printed when\n> > resolution, which seems not to achieve the original purpose. Am I\n> > missing something?\n>\n> Ok, I understand where the confusion is coming from.\n>\n> Indeed the new version is now printing the recovery conflict log message\n> during the conflict resolution (while the initial intention was to be\n> warned as soon as the replay had to wait).\n>\n> The advantage of the new version is that it would be consistent across\n> all the conflicts scenarios (if not, we would get messages during the\n> resolution or when the replay started waiting, depending of the conflict\n> scenario).\n>\n> On the other hand, the cons of the new version is that we would miss\n> messages when no resolution is needed (replay wait duration <\n> max_standby_streaming_delay), but is that really annoying? (As no\n> cancellation would occur)\n>\n> Thinking about it, i like the new version (being warned during the\n> resolution) as we would get messages only when cancelation will occur\n> (which is what the user might want to avoid, so the extra info would be\n> useful).\n>\n> What do you think?\n\nHmm, I think we print the reason why backends are canceled even of as\nnow by ProcessInterrupts(). With this patch and related patches you\nproposed on another thread, the startup process reports virtual xids\nbeing interrupted, the reason, and LSN of blocked WAL, then processes\nwill also report its virtual xid and reason. Therefore, the new\ninformation added by these patches is only the LSN of blocked WAL.\nAlso, the blocked WAL would be unblocked just after the startup\nprocess reports the resolution message. 
What use cases are you\nassuming?\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 28 Aug 2020 14:03:38 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "\nOn 8/28/20 7:03 AM, Masahiko Sawada wrote:\n> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n>\n>\n>\n> On Thu, 27 Aug 2020 at 20:58, Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>>\n>> On 8/27/20 10:16 AM, Masahiko Sawada wrote\n>>> On Mon, 10 Aug 2020 at 23:43, Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>>>> Hi,\n>>>>\n>>>> On 7/31/20 7:12 AM, Masahiko Sawada wrote:\n>>>>> + tmpWaitlist = waitlist;\n>>>>> + while (VirtualTransactionIdIsValid(*tmpWaitlist))\n>>>>> + {\n>>>>> + tmpWaitlist++;\n>>>>> + }\n>>>>> +\n>>>>> + num_waitlist_entries = (tmpWaitlist - waitlist);\n>>>>> +\n>>>>> + /* display wal record information */\n>>>>> + if (log_recovery_conflicts &&\n>>>>> (TimestampDifferenceExceeds(recovery_conflicts_log_time,\n>>>>> GetCurrentTimestamp(),\n>>>>> + DeadlockTimeout))) {\n>>>>> + LogBlockedWalRecordInfo(num_waitlist_entries, reason);\n>>>>> + recovery_conflicts_log_time = GetCurrentTimestamp();\n>>>>> + }\n>>>>>\n>>>>> recovery_conflicts_log_time is not initialized. And shouldn't we\n>>>>> compare the current timestamp to the timestamp when the startup\n>>>>> process started waiting?\n>>>>>\n>>>>> I think we should call LogBlockedWalRecordInfo() inside of the inner\n>>>>> while loop rather than at the beginning of\n>>>>> ResolveRecoveryConflictWithVirtualXIDs(). In lock conflict cases, the\n>>>>> startup process waits until 'ltime', then enters\n>>>>> ResolveRecoveryConflictWithVirtualXIDs() after reaching 'ltime'.\n>>>>> Therefore, it makes sense to call LogBlockedWalRecordInfo() at the\n>>>>> beginning of ResolveRecoveryConflictWithVirtualXIDs(). 
However, in\n>>>>> snapshot and tablespace conflict cases (i.g.\n>>>>> ResolveRecoveryConflictWithSnapshot() and\n>>>>> ResolveRecoveryConflictWithTablespace()), it enters\n>>>>> ResolveRecoveryConflictWithVirtualXIDs() without waits and waits for\n>>>>> reaching ‘ltime’ inside of the inner while look. So the above\n>>>>> condition could always be false.\n>>>> That would make the information being displayed after\n>>>> max_standby_streaming_delay is reached for the multiple cases you just\n>>>> described.\n>>> Sorry, it should be deadlock_timeout, not max_standby_streaming_delay.\n>>> Otherwise, the recovery conflict log message is printed when\n>>> resolution, which seems not to achieve the original purpose. Am I\n>>> missing something?\n>> Ok, I understand where the confusion is coming from.\n>>\n>> Indeed the new version is now printing the recovery conflict log message\n>> during the conflict resolution (while the initial intention was to be\n>> warned as soon as the replay had to wait).\n>>\n>> The advantage of the new version is that it would be consistent across\n>> all the conflicts scenarios (if not, we would get messages during the\n>> resolution or when the replay started waiting, depending of the conflict\n>> scenario).\n>>\n>> On the other hand, the cons of the new version is that we would miss\n>> messages when no resolution is needed (replay wait duration <\n>> max_standby_streaming_delay), but is that really annoying? (As no\n>> cancellation would occur)\n>>\n>> Thinking about it, i like the new version (being warned during the\n>> resolution) as we would get messages only when cancelation will occur\n>> (which is what the user might want to avoid, so the extra info would be\n>> useful).\n>>\n>> What do you think?\n> Hmm, I think we print the reason why backends are canceled even of as\n> now by ProcessInterrupts(). 
With this patch and related patches you\n> proposed on another thread, the startup process reports virtual xids\n> being interrupted, the reason, and LSN of blocked WAL, then processes\n> will also report its virtual xid and reason. Therefore, the new\n> information added by these patches is only the LSN of blocked WAL.\n\nThat's completely right, let's come back to the original intention of \nthis patch (means, don't wait for the conflict resolution to log messages).\n\nI'll submit a new version once updated.\n\nThanks!\n\nBertrand\n\n\n\n",
"msg_date": "Fri, 28 Aug 2020 16:14:25 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On 8/28/20 4:14 PM, Drouvot, Bertrand wrote:\n>\n> On 8/28/20 7:03 AM, Masahiko Sawada wrote:\n>> CAUTION: This email originated from outside of the organization. Do \n>> not click links or open attachments unless you can confirm the sender \n>> and know the content is safe.\n>>\n>>\n>>\n>> On Thu, 27 Aug 2020 at 20:58, Drouvot, Bertrand <bdrouvot@amazon.com> \n>> wrote:\n>>>\n>>> On 8/27/20 10:16 AM, Masahiko Sawada wrote\n>>>> On Mon, 10 Aug 2020 at 23:43, Drouvot, Bertrand \n>>>> <bdrouvot@amazon.com> wrote:\n>>>>> Hi,\n>>>>>\n>>>>> On 7/31/20 7:12 AM, Masahiko Sawada wrote:\n>>>>>> + tmpWaitlist = waitlist;\n>>>>>> + while (VirtualTransactionIdIsValid(*tmpWaitlist))\n>>>>>> + {\n>>>>>> + tmpWaitlist++;\n>>>>>> + }\n>>>>>> +\n>>>>>> + num_waitlist_entries = (tmpWaitlist - waitlist);\n>>>>>> +\n>>>>>> + /* display wal record information */\n>>>>>> + if (log_recovery_conflicts &&\n>>>>>> (TimestampDifferenceExceeds(recovery_conflicts_log_time,\n>>>>>> GetCurrentTimestamp(),\n>>>>>> + DeadlockTimeout))) {\n>>>>>> + LogBlockedWalRecordInfo(num_waitlist_entries, reason);\n>>>>>> + recovery_conflicts_log_time = GetCurrentTimestamp();\n>>>>>> + }\n>>>>>>\n>>>>>> recovery_conflicts_log_time is not initialized. And shouldn't we\n>>>>>> compare the current timestamp to the timestamp when the startup\n>>>>>> process started waiting?\n>>>>>>\n>>>>>> I think we should call LogBlockedWalRecordInfo() inside of the inner\n>>>>>> while loop rather than at the beginning of\n>>>>>> ResolveRecoveryConflictWithVirtualXIDs(). In lock conflict cases, \n>>>>>> the\n>>>>>> startup process waits until 'ltime', then enters\n>>>>>> ResolveRecoveryConflictWithVirtualXIDs() after reaching 'ltime'.\n>>>>>> Therefore, it makes sense to call LogBlockedWalRecordInfo() at the\n>>>>>> beginning of ResolveRecoveryConflictWithVirtualXIDs(). 
However, in\n>>>>>> snapshot and tablespace conflict cases (i.g.\n>>>>>> ResolveRecoveryConflictWithSnapshot() and\n>>>>>> ResolveRecoveryConflictWithTablespace()), it enters\n>>>>>> ResolveRecoveryConflictWithVirtualXIDs() without waits and waits for\n>>>>>> reaching ‘ltime’ inside of the inner while look. So the above\n>>>>>> condition could always be false.\n>>>>> That would make the information being displayed after\n>>>>> max_standby_streaming_delay is reached for the multiple cases you \n>>>>> just\n>>>>> described.\n>>>> Sorry, it should be deadlock_timeout, not max_standby_streaming_delay.\n>>>> Otherwise, the recovery conflict log message is printed when\n>>>> resolution, which seems not to achieve the original purpose. Am I\n>>>> missing something?\n>>> Ok, I understand where the confusion is coming from.\n>>>\n>>> Indeed the new version is now printing the recovery conflict log \n>>> message\n>>> during the conflict resolution (while the initial intention was to be\n>>> warned as soon as the replay had to wait).\n>>>\n>>> The advantage of the new version is that it would be consistent across\n>>> all the conflicts scenarios (if not, we would get messages during the\n>>> resolution or when the replay started waiting, depending of the \n>>> conflict\n>>> scenario).\n>>>\n>>> On the other hand, the cons of the new version is that we would miss\n>>> messages when no resolution is needed (replay wait duration <\n>>> max_standby_streaming_delay), but is that really annoying? (As no\n>>> cancellation would occur)\n>>>\n>>> Thinking about it, i like the new version (being warned during the\n>>> resolution) as we would get messages only when cancelation will occur\n>>> (which is what the user might want to avoid, so the extra info would be\n>>> useful).\n>>>\n>>> What do you think?\n>> Hmm, I think we print the reason why backends are canceled even of as\n>> now by ProcessInterrupts(). 
With this patch and related patches you\n>> proposed on another thread, the startup process reports virtual xids\n>> being interrupted, the reason, and LSN of blocked WAL, then processes\n>> will also report its virtual xid and reason. Therefore, the new\n>> information added by these patches is only the LSN of blocked WAL.\n>\n> That's completely right, let's come back to the original intention of \n> this patch (means, don't wait for the conflict resolution to log \n> messages).\n>\n> I'll submit a new version once updated.\n\nPlease find attached the new patch.\n\nIt provides the following outcomes depending on the conflict:\n\n2020-10-04 09:08:51.923 UTC [30788] LOG: recovery is waiting recovery \nconflict on buffer pin\n\nOR\n\n2020-10-04 09:52:25.832 UTC [1249] LOG: recovery is waiting recovery \nconflict on snapshot\n2020-10-04 09:52:25.832 UTC [1249] DETAIL: Conflicting virtual \ntransaction ids: 3/2, 2/4.\n\nOR\n\n2020-10-04 09:11:51.717 UTC [30788] LOG: recovery is waiting recovery \nconflict on lock\n2020-10-04 09:11:51.717 UTC [30788] DETAIL: Conflicting virtual \ntransaction id: 2/5.\n\nOR\n\n2020-10-04 09:13:04.104 UTC [30788] LOG: recovery is resolving recovery \nconflict on database\n\nThanks\n\nBertrand",
"msg_date": "Sun, 4 Oct 2020 13:48:22 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "> +extern const char *get_procsignal_reason_desc(ProcSignalReason reason)\n> +\t{\n> +\t\tconst char *reasonDesc = \"unknown reason\";\n> +\n> +\t\tswitch (reason)\n> +\t\t{\n> +\t\t\tcase PROCSIG_RECOVERY_CONFLICT_BUFFERPIN:\n> +\t\t\t\treasonDesc = \"buffer pin\";\n> +\t\t\t\tbreak;\n\nIt doesn't work to construct sentences from pieces, for translatability\nreasons. Maybe you can return the whole errmsg sentence from this\nroutine instead.\n\n\n\n",
"msg_date": "Sun, 4 Oct 2020 11:10:34 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "Hi,\n\nOn 10/4/20 4:10 PM, Alvaro Herrera wrote\n>> +extern const char *get_procsignal_reason_desc(ProcSignalReason reason)\n>> + {\n>> + const char *reasonDesc = \"unknown reason\";\n>> +\n>> + switch (reason)\n>> + {\n>> + case PROCSIG_RECOVERY_CONFLICT_BUFFERPIN:\n>> + reasonDesc = \"buffer pin\";\n>> + break;\n> It doesn't work to construct sentences from pieces, for translatability\n> reasons. Maybe you can return the whole errmsg sentence from this\n> routine instead.\n>\nThanks for the feedback!\n\nEnclosed is a new version that takes care of it.\n\nBertrand",
"msg_date": "Mon, 12 Oct 2020 09:05:05 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On Sun, 4 Oct 2020 at 20:48, Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n>\n> On 8/28/20 4:14 PM, Drouvot, Bertrand wrote:\n> >\n> > On 8/28/20 7:03 AM, Masahiko Sawada wrote:\n> >> CAUTION: This email originated from outside of the organization. Do\n> >> not click links or open attachments unless you can confirm the sender\n> >> and know the content is safe.\n> >>\n> >>\n> >>\n> >> On Thu, 27 Aug 2020 at 20:58, Drouvot, Bertrand <bdrouvot@amazon.com>\n> >> wrote:\n> >>>\n> >>> On 8/27/20 10:16 AM, Masahiko Sawada wrote\n> >>>> On Mon, 10 Aug 2020 at 23:43, Drouvot, Bertrand\n> >>>> <bdrouvot@amazon.com> wrote:\n> >>>>> Hi,\n> >>>>>\n> >>>>> On 7/31/20 7:12 AM, Masahiko Sawada wrote:\n> >>>>>> + tmpWaitlist = waitlist;\n> >>>>>> + while (VirtualTransactionIdIsValid(*tmpWaitlist))\n> >>>>>> + {\n> >>>>>> + tmpWaitlist++;\n> >>>>>> + }\n> >>>>>> +\n> >>>>>> + num_waitlist_entries = (tmpWaitlist - waitlist);\n> >>>>>> +\n> >>>>>> + /* display wal record information */\n> >>>>>> + if (log_recovery_conflicts &&\n> >>>>>> (TimestampDifferenceExceeds(recovery_conflicts_log_time,\n> >>>>>> GetCurrentTimestamp(),\n> >>>>>> + DeadlockTimeout))) {\n> >>>>>> + LogBlockedWalRecordInfo(num_waitlist_entries, reason);\n> >>>>>> + recovery_conflicts_log_time = GetCurrentTimestamp();\n> >>>>>> + }\n> >>>>>>\n> >>>>>> recovery_conflicts_log_time is not initialized. And shouldn't we\n> >>>>>> compare the current timestamp to the timestamp when the startup\n> >>>>>> process started waiting?\n> >>>>>>\n> >>>>>> I think we should call LogBlockedWalRecordInfo() inside of the inner\n> >>>>>> while loop rather than at the beginning of\n> >>>>>> ResolveRecoveryConflictWithVirtualXIDs(). 
In lock conflict cases,\n> >>>>>> the\n> >>>>>> startup process waits until 'ltime', then enters\n> >>>>>> ResolveRecoveryConflictWithVirtualXIDs() after reaching 'ltime'.\n> >>>>>> Therefore, it makes sense to call LogBlockedWalRecordInfo() at the\n> >>>>>> beginning of ResolveRecoveryConflictWithVirtualXIDs(). However, in\n> >>>>>> snapshot and tablespace conflict cases (i.g.\n> >>>>>> ResolveRecoveryConflictWithSnapshot() and\n> >>>>>> ResolveRecoveryConflictWithTablespace()), it enters\n> >>>>>> ResolveRecoveryConflictWithVirtualXIDs() without waits and waits for\n> >>>>>> reaching ‘ltime’ inside of the inner while look. So the above\n> >>>>>> condition could always be false.\n> >>>>> That would make the information being displayed after\n> >>>>> max_standby_streaming_delay is reached for the multiple cases you\n> >>>>> just\n> >>>>> described.\n> >>>> Sorry, it should be deadlock_timeout, not max_standby_streaming_delay.\n> >>>> Otherwise, the recovery conflict log message is printed when\n> >>>> resolution, which seems not to achieve the original purpose. Am I\n> >>>> missing something?\n> >>> Ok, I understand where the confusion is coming from.\n> >>>\n> >>> Indeed the new version is now printing the recovery conflict log\n> >>> message\n> >>> during the conflict resolution (while the initial intention was to be\n> >>> warned as soon as the replay had to wait).\n> >>>\n> >>> The advantage of the new version is that it would be consistent across\n> >>> all the conflicts scenarios (if not, we would get messages during the\n> >>> resolution or when the replay started waiting, depending of the\n> >>> conflict\n> >>> scenario).\n> >>>\n> >>> On the other hand, the cons of the new version is that we would miss\n> >>> messages when no resolution is needed (replay wait duration <\n> >>> max_standby_streaming_delay), but is that really annoying? 
(As no\n> >>> cancellation would occur)\n> >>>\n> >>> Thinking about it, i like the new version (being warned during the\n> >>> resolution) as we would get messages only when cancelation will occur\n> >>> (which is what the user might want to avoid, so the extra info would be\n> >>> useful).\n> >>>\n> >>> What do you think?\n> >> Hmm, I think we print the reason why backends are canceled even of as\n> >> now by ProcessInterrupts(). With this patch and related patches you\n> >> proposed on another thread, the startup process reports virtual xids\n> >> being interrupted, the reason, and LSN of blocked WAL, then processes\n> >> will also report its virtual xid and reason. Therefore, the new\n> >> information added by these patches is only the LSN of blocked WAL.\n> >\n> > That's completely right, let's come back to the original intention of\n> > this patch (means, don't wait for the conflict resolution to log\n> > messages).\n> >\n> > I'll submit a new version once updated.\n>\n> Please find attached the new patch.\n>\n> It provides the following outcomes depending on the conflict:\n>\n> 2020-10-04 09:08:51.923 UTC [30788] LOG: recovery is waiting recovery\n> conflict on buffer pin\n>\n> OR\n>\n> 2020-10-04 09:52:25.832 UTC [1249] LOG: recovery is waiting recovery\n> conflict on snapshot\n> 2020-10-04 09:52:25.832 UTC [1249] DETAIL: Conflicting virtual\n> transaction ids: 3/2, 2/4.\n>\n> OR\n>\n> 2020-10-04 09:11:51.717 UTC [30788] LOG: recovery is waiting recovery\n> conflict on lock\n> 2020-10-04 09:11:51.717 UTC [30788] DETAIL: Conflicting virtual\n> transaction id: 2/5.\n>\n> OR\n>\n> 2020-10-04 09:13:04.104 UTC [30788] LOG: recovery is resolving recovery\n> conflict on database\n>\n\nSorry I didn't realize I failed to send review comments. 
Here is my\ncomments on the previous version (v4) patch (attached to [1]):\n\n---\n@ -3828,6 +3830,8 @@ LockBufferForCleanup(Buffer buffer)\n GetPrivateRefCount(buffer));\n\n bufHdr = GetBufferDescriptor(buffer - 1);\n waitStart = GetCurrentTimestamp();\n logged = false;\n\nI think it's better to avoid calling GetCurrentTimestamp() when the\ntimestamp is not used. In the above case, we can call it only when\nlog_recovery_conflict_waits is true.\n\n---\n+void\n+LogRecoveryConflict(VirtualTransactionId *waitlist, ProcSignalReason\nreason, bool waiting)\n\nIt seems to me that having 'reason' at the first argument is natural.\n\n---\n+ /* Log the recovery conflict */\n+ if (log_recovery_conflict_waits)\n+ LogRecoveryConflict(NULL,\nPROCSIG_RECOVERY_CONFLICT_DATABASE, false);\n\nThere seems no need to log recovery conflicts because no wait occurs\nwhen recovery conflict with database happens.\n\n---\nif (InHotStandby)\n {\n /* Set a timer and wait for that or for the\nLock to be granted */\n+ VirtualTransactionId *backends;\n+ backends =\nGetLockConflicts(&locallock->tag.lock, AccessExclusiveLock, NULL);\n+ if (log_recovery_conflict_waits && !logged) {\n+ LogRecoveryConflict(backends,\nPROCSIG_RECOVERY_CONFLICT_LOCK, true);\n+ logged = true;\n+ }\n+\n ResolveRecoveryConflictWithLock(locallock->tag.lock);\n }\n\nWe log the recovery conflicts without checking deadlock timeout here.\nI think we should call LogRecoveryConflict() after\nResolveRecoveryConflictWithLock() with the check if we're waiting for\nlonger than deadlock_time sec. 
That way, we can also remove the\nfollowing description in the doc, which seems more appropriate\nbehavior:\n\n+ For the lock conflict case, it does not wait for\n<varname>deadlock_timeout</varname> to be reached.\n\nAlso, in ResolveRecoveryConflictWithVirtualXIDs() called from\nResolveRecoveryConflictWithLock(), we could log the same again here:\n\n+ /* Log the recovery conflict */\n+ if (log_recovery_conflict_waits && !logged\n+ && TimestampDifferenceExceeds(waitStart,\n+\nGetCurrentTimestamp(), DeadlockTimeout))\n+ {\n+ LogRecoveryConflict(waitlist, reason,\nreport_waiting);\n+ logged = true;\n+ }\n\nNote that since the 'report_waiting' is false in this case, waitStart\nis not initialized.\n\nI think we can use 'report_waiting' for this purpose too. The\n'report_waiting' is false in the lock conflict cases where we also\nwant to skip logging a recovery conflict in\nResolveRecoveryConflictWithVirtualXIDs().\n\n---\n@@ -1069,6 +1069,7 @@ ProcSleep(LOCALLOCK *locallock, LockMethod\nlockMethodTable)\n PGPROC *proc;\n PGPROC *leader = MyProc->lockGroupLeader;\n int i;\n+ bool logged;\n\nI guess it's better to rename 'logged' to 'logged_recovery_conflict'\nor other to make the purpose clear. This function logs not only\nrecovery conflicts but also deadlock and waits for lock.\n\nAs an alternative idea, possibly we can allow it to report the log\nmultiple times with how long it has been waiting. We pass the time when the\nwait started to LogRecoveryConflict() and show something like\n\"recovery is still waiting recovery conflict on buffer pin after 100.0\nms\". But we can also leave it as the next improvement.\n\n---\n+extern const char *get_procsignal_reason_desc(ProcSignalReason reason)\n+ {\n+ const char *reasonDesc = \"unknown reason\";\n+\n\nI think we can move this function to standby.c.\n\n---\nThe patch needs to run pgindent. 
For instance, the following change\ndoesn't follow the coding style:\n\n+\n+ if (waitlist && waiting) {\n+ vxids = waitlist;\n+ count = 0;\n+ initStringInfo(&buf);\n\n---\nCurrently, we report the list of the conflicted virtual transaction\nids but perhaps showing process ids instead seems better. What do you\nthink?\n\nI've attached the patch as an idea of fixing the above comments as\nwell as the comment from Alvaro. I can be applied on top of v4 patch.\n\n[1] https://www.postgresql.org/message-id/29da248a-e21c-a3eb-a051-f1ec79b13d31%40amazon.com\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 14 Oct 2020 07:36:16 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On 2020-Oct-14, Masahiko Sawada wrote:\n\n> I've attached the patch as an idea of fixing the above comments as\n> well as the comment from Alvaro. I can be applied on top of v4 patch.\n\nOne note about the translation stuff. Currently you have _(\"...\") where\nthe string is produced, and then ereport(.., errmsg(\"%s\", str) where it\nis used. Both those things will attempt to translate the string, which\nisn't great. It is better if we only translate once. You have two\noptions to fix this: one is to change _() to gettext_noop() (which marks\nthe string for translation so that it appears in the message catalog,\nbut it does not return the translation -- it returns the original, and\nthen errmsg() translates at run time). The other is to change errmsg()\nto errmsg_internal() .. so the function returns the translated message\nand errmsg_internal() doesn't apply a translation.\n\nI prefer the first option, because if we ever include a server feature\nto log both the non-translated message alongside the translated one, we\nwill already have both in hand.\n\n\n",
"msg_date": "Tue, 13 Oct 2020 19:44:08 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On Wed, 14 Oct 2020 at 07:44, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2020-Oct-14, Masahiko Sawada wrote:\n>\n> > I've attached the patch as an idea of fixing the above comments as\n> > well as the comment from Alvaro. I can be applied on top of v4 patch.\n>\n> One note about the translation stuff. Currently you have _(\"...\") where\n> the string is produced, and then ereport(.., errmsg(\"%s\", str) where it\n> is used. Both those things will attempt to translate the string, which\n> isn't great. It is better if we only translate once. You have two\n> options to fix this: one is to change _() to gettext_noop() (which marks\n> the string for translation so that it appears in the message catalog,\n> but it does not return the translation -- it returns the original, and\n> then errmsg() translates at run time). The other is to change errmsg()\n> to errmsg_internal() .. so the function returns the translated message\n> and errmsg_internal() doesn't apply a translation.\n>\n> I prefer the first option, because if we ever include a server feature\n> to log both the non-translated message alongside the translated one, we\n> will already have both in hand.\n\nThanks, I didn't know that. So perhaps ATWrongRelkindError() has the\nsame translation problem? It uses _() when producing the message but\nalso uses errmsg().\n\nI've attached the patch changed accordingly. I also fixed some bugs\naround recovery conflicts on locks and changed the code so that the\nlog shows pids instead of virtual transaction ids since pids are much\neasy to use for the users.\n\nRegards,\n\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 14 Oct 2020 17:39:20 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "At Wed, 14 Oct 2020 17:39:20 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in \n> On Wed, 14 Oct 2020 at 07:44, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2020-Oct-14, Masahiko Sawada wrote:\n> >\n> > > I've attached the patch as an idea of fixing the above comments as\n> > > well as the comment from Alvaro. I can be applied on top of v4 patch.\n> >\n> > One note about the translation stuff. Currently you have _(\"...\") where\n> > the string is produced, and then ereport(.., errmsg(\"%s\", str) where it\n> > is used. Both those things will attempt to translate the string, which\n> > isn't great. It is better if we only translate once. You have two\n> > options to fix this: one is to change _() to gettext_noop() (which marks\n> > the string for translation so that it appears in the message catalog,\n> > but it does not return the translation -- it returns the original, and\n> > then errmsg() translates at run time). The other is to change errmsg()\n> > to errmsg_internal() .. so the function returns the translated message\n> > and errmsg_internal() doesn't apply a translation.\n> >\n> > I prefer the first option, because if we ever include a server feature\n> > to log both the non-translated message alongside the translated one, we\n> > will already have both in hand.\n> \n> Thanks, I didn't know that. So perhaps ATWrongRelkindError() has the\n> same translation problem? It uses _() when producing the message but\n> also uses errmsg().\n> \n> I've attached the patch changed accordingly. 
I also fixed some bugs\n> around recovery conflicts on locks and changed the code so that the\n> log shows pids instead of virtual transaction ids since pids are much\n> easy to use for the users.\n\nYou're misunderstanding.\n\nereport(..(errmsg(\"%s\", _(\"hogehoge\")))) results in\nfprintf((translated(\"%s\")), translate(\"hogehoge\")).\n\nSo your change (errmsg(\"%s\", gettext_noop(\"hogehoge\")) results in\n\nfprintf((translated(\"%s\")), DONT_translate(\"hogehoge\")).\n\nwhich leads to a translation problem.\n\n(errmsg(gettext_noop(\"hogehoge\"))\n\nworks fine. You can see the instance in aclchk.c.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 15 Oct 2020 12:13:38 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On Thu, 15 Oct 2020 at 12:13, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 14 Oct 2020 17:39:20 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in\n> > On Wed, 14 Oct 2020 at 07:44, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > >\n> > > On 2020-Oct-14, Masahiko Sawada wrote:\n> > >\n> > > > I've attached the patch as an idea of fixing the above comments as\n> > > > well as the comment from Alvaro. I can be applied on top of v4 patch.\n> > >\n> > > One note about the translation stuff. Currently you have _(\"...\") where\n> > > the string is produced, and then ereport(.., errmsg(\"%s\", str) where it\n> > > is used. Both those things will attempt to translate the string, which\n> > > isn't great. It is better if we only translate once. You have two\n> > > options to fix this: one is to change _() to gettext_noop() (which marks\n> > > the string for translation so that it appears in the message catalog,\n> > > but it does not return the translation -- it returns the original, and\n> > > then errmsg() translates at run time). The other is to change errmsg()\n> > > to errmsg_internal() .. so the function returns the translated message\n> > > and errmsg_internal() doesn't apply a translation.\n> > >\n> > > I prefer the first option, because if we ever include a server feature\n> > > to log both the non-translated message alongside the translated one, we\n> > > will already have both in hand.\n> >\n> > Thanks, I didn't know that. So perhaps ATWrongRelkindError() has the\n> > same translation problem? It uses _() when producing the message but\n> > also uses errmsg().\n> >\n> > I've attached the patch changed accordingly. I also fixed some bugs\n> > around recovery conflicts on locks and changed the code so that the\n> > log shows pids instead of virtual transaction ids since pids are much\n> > easy to use for the users.\n>\n> You're misunderstanding.\n\nThank you! 
That's helpful for me.\n\n> ereport(..(errmsg(\"%s\", _(\"hogehoge\")))) results in\n> fprintf((translated(\"%s\")), translate(\"hogehoge\")).\n>\n> So your change (errmsg(\"%s\", gettext_noop(\"hogehoge\")) results in\n>\n> fprintf((translated(\"%s\")), DONT_translate(\"hogehoge\")).\n>\n> which leads to a translation problem.\n>\n> (errmsg(gettext_noop(\"hogehoge\"))\n\nThis seems equivalent to (errmsg(\"hogehoge\")), right?\n\nI think I could understand translation stuff. Given we only report the\nconst string returned from get_recovery_conflict_desc() without\nplaceholders, the patch needs to use errmsg_internal() instead while\nnot changing _() part. (errmsg(get_recovery_conflict_desc())) is not\ngood (warned by -Wformat-security).\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 15 Oct 2020 14:28:57 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "At Thu, 15 Oct 2020 14:28:57 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in \n> > ereport(..(errmsg(\"%s\", _(\"hogehoge\")))) results in\n> > fprintf((translated(\"%s\")), translate(\"hogehoge\")).\n> >\n> > So your change (errmsg(\"%s\", gettext_noop(\"hogehoge\")) results in\n> >\n> > fprintf((translated(\"%s\")), DONT_translate(\"hogehoge\")).\n> >\n> > which leads to a translation problem.\n> >\n> > (errmsg(gettext_noop(\"hogehoge\"))\n> \n> This seems equivalent to (errmsg(\"hogehoge\")), right?\n\nYes and no. However eventually the two work the same way,\n\"(errmsg(gettext_noop(\"hogehoge\"))\" is a shorthand of\n\n1: char *msg = gettext_noop(\"hogehoge\");\n...\n2: .. (errmsg(msg));\n\nThat is, the line 1 only registers a message id \"hogehoge\" and doesn't\ntranslate. The line 2 tries to translate the content of msg and it\nfinds the translation for the message id \"hogehoge\".\n\n> I think I could understand translation stuff. Given we only report the\n> const string returned from get_recovery_conflict_desc() without\n> placeholders, the patch needs to use errmsg_internal() instead while\n> not changing _() part. (errmsg(get_recovery_conflict_desc())) is not\n> good (warned by -Wformat-security).\n\nAh, right. We get a complaint if no value parameters are added. We can\nsilence it by adding a dummy parameter to errmsg, but I'm not sure\nwhich is preferable.\n\nregards.\n\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 15 Oct 2020 14:52:10 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On Thu, 15 Oct 2020 at 14:52, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 15 Oct 2020 14:28:57 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in\n> > > ereport(..(errmsg(\"%s\", _(\"hogehoge\")))) results in\n> > > fprintf((translated(\"%s\")), translate(\"hogehoge\")).\n> > >\n> > > So your change (errmsg(\"%s\", gettext_noop(\"hogehoge\")) results in\n> > >\n> > > fprintf((translated(\"%s\")), DONT_translate(\"hogehoge\")).\n> > >\n> > > which leads to a translation problem.\n> > >\n> > > (errmsg(gettext_noop(\"hogehoge\"))\n> >\n> > This seems equivalent to (errmsg(\"hogehoge\")), right?\n>\n> Yes and no. However eventually the two works the same way,\n> \"(errmsg(gettext_noop(\"hogehoge\"))\" is a shorthand of\n>\n> 1: char *msg = gettext_noop(\"hogehoge\");\n> ...\n> 2: .. (errmsg(msg));\n>\n> That is, the line 1 only registers a message id \"hogehoge\" and doesn't\n> translate. The line 2 tries to translate the content of msg and it\n> finds the translation for the message id \"hogehoge\".\n\nUnderstood.\n\n>\n> > I think I could understand translation stuff. Given we only report the\n> > const string returned from get_recovery_conflict_desc() without\n> > placeholders, the patch needs to use errmsg_internal() instead while\n> > not changing _() part. (errmsg(get_recovery_conflict_desc())) is not\n> > good (warned by -Wformat-security).\n>\n> Ah, right. we get a complain if no value parameters added. We can\n> silence it by adding a dummy parameter to errmsg, but I'm not sure\n> which is preferable.\n\nOkay, I'm going to use errmsg_internal() for now until a better idea comes.\n\nI've attached the updated patch that fixed the translation part.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 15 Oct 2020 16:15:51 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "Hi,\n\nOn 10/15/20 9:15 AM, Masahiko Sawada wrote:\n> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n>\n>\n>\n> On Thu, 15 Oct 2020 at 14:52, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>> At Thu, 15 Oct 2020 14:28:57 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in\n>>>> ereport(..(errmsg(\"%s\", _(\"hogehoge\")))) results in\n>>>> fprintf((translated(\"%s\")), translate(\"hogehoge\")).\n>>>>\n>>>> So your change (errmsg(\"%s\", gettext_noop(\"hogehoge\")) results in\n>>>>\n>>>> fprintf((translated(\"%s\")), DONT_translate(\"hogehoge\")).\n>>>>\n>>>> which leads to a translation problem.\n>>>>\n>>>> (errmsg(gettext_noop(\"hogehoge\"))\n>>> This seems equivalent to (errmsg(\"hogehoge\")), right?\n>> Yes and no. However eventually the two works the same way,\n>> \"(errmsg(gettext_noop(\"hogehoge\"))\" is a shorthand of\n>>\n>> 1: char *msg = gettext_noop(\"hogehoge\");\n>> ...\n>> 2: .. (errmsg(msg));\n>>\n>> That is, the line 1 only registers a message id \"hogehoge\" and doesn't\n>> translate. The line 2 tries to translate the content of msg and it\n>> finds the translation for the message id \"hogehoge\".\n> Understood.\n>\n>>> I think I could understand translation stuff. Given we only report the\n>>> const string returned from get_recovery_conflict_desc() without\n>>> placeholders, the patch needs to use errmsg_internal() instead while\n>>> not changing _() part. (errmsg(get_recovery_conflict_desc())) is not\n>>> good (warned by -Wformat-security).\n>> Ah, right. we get a complain if no value parameters added. 
We can\n>> silence it by adding a dummy parameter to errmsg, but I'm not sure\n>> which is preferable.\n> Okay, I'm going to use errmsg_internal() for now until a better idea comes.\n>\n> I've attached the updated patch that fixed the translation part.\n\nThanks for reviewing and helping on this patch!\n\nThe patch tester bot is currently failing due to:\n\n\"proc.c:1290:5: error: ‘standbyWaitStart’ may be used uninitialized in \nthis function [-Werror=maybe-uninitialized]\"\n\nI've attached a new version with the minor change to fix it.\n\nBertrand",
"msg_date": "Tue, 20 Oct 2020 14:58:10 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On Tue, 20 Oct 2020 at 22:02, Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> Hi,\n>\n> On 10/15/20 9:15 AM, Masahiko Sawada wrote:\n> > CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n> >\n> >\n> >\n> > On Thu, 15 Oct 2020 at 14:52, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> >> At Thu, 15 Oct 2020 14:28:57 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in\n> >>>> ereport(..(errmsg(\"%s\", _(\"hogehoge\")))) results in\n> >>>> fprintf((translated(\"%s\")), translate(\"hogehoge\")).\n> >>>>\n> >>>> So your change (errmsg(\"%s\", gettext_noop(\"hogehoge\")) results in\n> >>>>\n> >>>> fprintf((translated(\"%s\")), DONT_translate(\"hogehoge\")).\n> >>>>\n> >>>> which leads to a translation problem.\n> >>>>\n> >>>> (errmsg(gettext_noop(\"hogehoge\"))\n> >>> This seems equivalent to (errmsg(\"hogehoge\")), right?\n> >> Yes and no. However eventually the two works the same way,\n> >> \"(errmsg(gettext_noop(\"hogehoge\"))\" is a shorthand of\n> >>\n> >> 1: char *msg = gettext_noop(\"hogehoge\");\n> >> ...\n> >> 2: .. (errmsg(msg));\n> >>\n> >> That is, the line 1 only registers a message id \"hogehoge\" and doesn't\n> >> translate. The line 2 tries to translate the content of msg and it\n> >> finds the translation for the message id \"hogehoge\".\n> > Understood.\n> >\n> >>> I think I could understand translation stuff. Given we only report the\n> >>> const string returned from get_recovery_conflict_desc() without\n> >>> placeholders, the patch needs to use errmsg_internal() instead while\n> >>> not changing _() part. (errmsg(get_recovery_conflict_desc())) is not\n> >>> good (warned by -Wformat-security).\n> >> Ah, right. we get a complain if no value parameters added. 
We can\n> >> silence it by adding a dummy parameter to errmsg, but I'm not sure\n> >> which is preferable.\n> > Okay, I'm going to use errmsg_internal() for now until a better idea comes.\n> >\n> > I've attached the updated patch that fixed the translation part.\n>\n> Thanks for reviewing and helping on this patch!\n>\n> The patch tester bot is currently failing due to:\n>\n> \"proc.c:1290:5: error: ‘standbyWaitStart’ may be used uninitialized in\n> this function [-Werror=maybe-uninitialized]\"\n>\n> I've attached a new version with the minor change to fix it.\n>\n\nThank you for updating the patch!\n\nI've looked at the patch and revised a bit the formatting etc.\n\nAfter some thoughts, I think it might be better to report the waiting\ntime as well. it would help users and there is no particular reason\nfor logging the report only once. It also helps make the patch clean\nby reducing the variables such as recovery_conflict_logged. I’ve\nimplemented it in the v8 patch. The log message is now like:\n\n LOG: recovery is still waiting recovery conflict on lock after 1062.601 ms\n DETAIL: Conflicting processes: 35116, 35115, 35114.\n CONTEXT: WAL redo at 0/3000028 for Standby/LOCK: xid 510 db 13185 rel 16384\n\nLOG: recovery is still waiting recovery conflict on lock after 2065.682 ms\nDETAIL: Conflicting processes: 35115, 35114.\nCONTEXT: WAL redo at 0/3000028 for Standby/LOCK: xid 510 db 13185 rel 16384\n\nLOG: recovery is still waiting recovery conflict on lock after 3087.926 ms\nDETAIL: Conflicting process: 35114.\nCONTEXT: WAL redo at 0/3000028 for Standby/LOCK: xid 510 db 13185 rel 16384\n\nWhat do you think?\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 27 Oct 2020 09:41:14 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "\n\nOn 2020/10/27 9:41, Masahiko Sawada wrote:\n> On Tue, 20 Oct 2020 at 22:02, Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>>\n>> Hi,\n>>\n>> On 10/15/20 9:15 AM, Masahiko Sawada wrote:\n>>> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n>>>\n>>>\n>>>\n>>> On Thu, 15 Oct 2020 at 14:52, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>>>> At Thu, 15 Oct 2020 14:28:57 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in\n>>>>>> ereport(..(errmsg(\"%s\", _(\"hogehoge\")))) results in\n>>>>>> fprintf((translated(\"%s\")), translate(\"hogehoge\")).\n>>>>>>\n>>>>>> So your change (errmsg(\"%s\", gettext_noop(\"hogehoge\")) results in\n>>>>>>\n>>>>>> fprintf((translated(\"%s\")), DONT_translate(\"hogehoge\")).\n>>>>>>\n>>>>>> which leads to a translation problem.\n>>>>>>\n>>>>>> (errmsg(gettext_noop(\"hogehoge\"))\n>>>>> This seems equivalent to (errmsg(\"hogehoge\")), right?\n>>>> Yes and no. However eventually the two works the same way,\n>>>> \"(errmsg(gettext_noop(\"hogehoge\"))\" is a shorthand of\n>>>>\n>>>> 1: char *msg = gettext_noop(\"hogehoge\");\n>>>> ...\n>>>> 2: .. (errmsg(msg));\n>>>>\n>>>> That is, the line 1 only registers a message id \"hogehoge\" and doesn't\n>>>> translate. The line 2 tries to translate the content of msg and it\n>>>> finds the translation for the message id \"hogehoge\".\n>>> Understood.\n>>>\n>>>>> I think I could understand translation stuff. Given we only report the\n>>>>> const string returned from get_recovery_conflict_desc() without\n>>>>> placeholders, the patch needs to use errmsg_internal() instead while\n>>>>> not changing _() part. (errmsg(get_recovery_conflict_desc())) is not\n>>>>> good (warned by -Wformat-security).\n>>>> Ah, right. we get a complain if no value parameters added. 
We can\n>>>> silence it by adding a dummy parameter to errmsg, but I'm not sure\n>>>> which is preferable.\n>>> Okay, I'm going to use errmsg_internal() for now until a better idea comes.\n>>>\n>>> I've attached the updated patch that fixed the translation part.\n>>\n>> Thanks for reviewing and helping on this patch!\n>>\n>> The patch tester bot is currently failing due to:\n>>\n>> \"proc.c:1290:5: error: ‘standbyWaitStart’ may be used uninitialized in\n>> this function [-Werror=maybe-uninitialized]\"\n>>\n>> I've attached a new version with the minor change to fix it.\n>>\n> \n> Thank you for updating the patch!\n> \n> I've looked at the patch and revised a bit the formatting etc.\n> \n> After some thoughts, I think it might be better to report the waiting\n> time as well. it would help users and there is no particular reason\n> for logging the report only once. It also helps make the patch clean\n> by reducing the variables such as recovery_conflict_logged. I’ve\n> implemented it in the v8 patch.\n\nI read v8 patch. Here are review comments.\n\nWhen recovery conflict with buffer pin happens, log message is output\nevery deadlock_timeout. Is this intentional behavior? If yes, IMO that's\nnot good because lots of messages can be output.\n\n+\tif (log_recovery_conflict_waits)\n+\t\twaitStart = GetCurrentTimestamp();\n\nWhat happens if log_recovery_conflict_waits is off at the beginning and\nthen it's changed during waiting for the conflict? In this case, waitStart is\nzero, but log_recovery_conflict_waits is on. This may cause some problems?\n\n+\t\t\tif (report_waiting)\n+\t\t\t\tts = GetCurrentTimestamp();\n\nGetCurrentTimestamp() doesn't need to be called every cycle\nin the loop after \"logged\" is true and \"new_status\" is not NULL.\n\n+extern const char *get_procsignal_reason_desc(ProcSignalReason reason);\n\nIs this garbage?\n\nWhen log_lock_waits is enabled, both \"still waiting for ...\" and \"acquired ...\"\nmessages are output. 
Like this, when log_recovery_conflict_waits is enabled,\nnot only \"still waiting ...\" but also \"resolved ...\" message should be output?\nThe latter message might not need to be output if the conflict is canceled\ndue to max_standby_xxx_delay parameter. The latter message would be\nuseful to know how long the recovery was waiting for the conflict. Thought?\nIt's ok to implement this as a separate patch later, though.\n\n+ Controls whether a log message is produced when the startup process\n+ is waiting longer than longer than <varname>deadlock_timeout</varname>.\n\nTypo: \"longer than longer than\" should be \"longer than\".\n\nProbably the section about hot standby in high-availability.sgml would need\nto be updated.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 29 Oct 2020 00:16:22 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": ",\n\nOn Thu, 29 Oct 2020 at 00:16, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/10/27 9:41, Masahiko Sawada wrote:\n> > On Tue, 20 Oct 2020 at 22:02, Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> >>\n> >> Hi,\n> >>\n> >> On 10/15/20 9:15 AM, Masahiko Sawada wrote:\n> >>> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n> >>>\n> >>>\n> >>>\n> >>> On Thu, 15 Oct 2020 at 14:52, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> >>>> At Thu, 15 Oct 2020 14:28:57 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in\n> >>>>>> ereport(..(errmsg(\"%s\", _(\"hogehoge\")))) results in\n> >>>>>> fprintf((translated(\"%s\")), translate(\"hogehoge\")).\n> >>>>>>\n> >>>>>> So your change (errmsg(\"%s\", gettext_noop(\"hogehoge\")) results in\n> >>>>>>\n> >>>>>> fprintf((translated(\"%s\")), DONT_translate(\"hogehoge\")).\n> >>>>>>\n> >>>>>> which leads to a translation problem.\n> >>>>>>\n> >>>>>> (errmsg(gettext_noop(\"hogehoge\"))\n> >>>>> This seems equivalent to (errmsg(\"hogehoge\")), right?\n> >>>> Yes and no. However eventually the two works the same way,\n> >>>> \"(errmsg(gettext_noop(\"hogehoge\"))\" is a shorthand of\n> >>>>\n> >>>> 1: char *msg = gettext_noop(\"hogehoge\");\n> >>>> ...\n> >>>> 2: .. (errmsg(msg));\n> >>>>\n> >>>> That is, the line 1 only registers a message id \"hogehoge\" and doesn't\n> >>>> translate. The line 2 tries to translate the content of msg and it\n> >>>> finds the translation for the message id \"hogehoge\".\n> >>> Understood.\n> >>>\n> >>>>> I think I could understand translation stuff. Given we only report the\n> >>>>> const string returned from get_recovery_conflict_desc() without\n> >>>>> placeholders, the patch needs to use errmsg_internal() instead while\n> >>>>> not changing _() part. 
(errmsg(get_recovery_conflict_desc())) is not\n> >>>>> good (warned by -Wformat-security).\n> >>>> Ah, right. we get a complain if no value parameters added. We can\n> >>>> silence it by adding a dummy parameter to errmsg, but I'm not sure\n> >>>> which is preferable.\n> >>> Okay, I'm going to use errmsg_internal() for now until a better idea comes.\n> >>>\n> >>> I've attached the updated patch that fixed the translation part.\n> >>\n> >> Thanks for reviewing and helping on this patch!\n> >>\n> >> The patch tester bot is currently failing due to:\n> >>\n> >> \"proc.c:1290:5: error: ‘standbyWaitStart’ may be used uninitialized in\n> >> this function [-Werror=maybe-uninitialized]\"\n> >>\n> >> I've attached a new version with the minor change to fix it.\n> >>\n> >\n> > Thank you for updating the patch!\n> >\n> > I've looked at the patch and revised a bit the formatting etc.\n> >\n> > After some thoughts, I think it might be better to report the waiting\n> > time as well. it would help users and there is no particular reason\n> > for logging the report only once. It also helps make the patch clean\n> > by reducing the variables such as recovery_conflict_logged. I’ve\n> > implemented it in the v8 patch.\n>\n> I read v8 patch. Here are review comments.\n\nThank you for your review.\n\n>\n> When recovery conflict with buffer pin happens, log message is output\n> every deadlock_timeout. Is this intentional behavior? If yes, IMO that's\n> not good because lots of messages can be output.\n\nAgreed.\n\nif we were to log the recovery conflict only once in bufferpin\nconflict case, we would log it multiple times only in lock conflict\ncase. So I guess it's better to do that in all conflict cases.\n\n>\n> + if (log_recovery_conflict_waits)\n> + waitStart = GetCurrentTimestamp();\n>\n> What happens if log_recovery_conflict_waits is off at the beginning and\n> then it's changed during waiting for the conflict? 
In this case, waitStart is\n> zero, but log_recovery_conflict_waits is on. This may cause some problems?\n\nHmm, I didn't see a path that happens to reload the config file during\nwaiting for buffer cleanup lock. Even if the startup process receives\nSIGHUP during that, it calls HandleStartupProcInterrupts() at the next\nconvenient time. It could be the beginning of main apply loop or\ninside WaitForWALToBecomeAvailable() and so on but I didn’t see it in\nthe wait loop for taking a buffer cleanup. However, I think it’s\nbetter to use (waitStart > 0) for safety when checking if we log the\nrecovery conflict instead of log_recovery_conflict_waits.\n\n>\n> + if (report_waiting)\n> + ts = GetCurrentTimestamp();\n>\n> GetCurrentTimestamp() doesn't need to be called every cycle\n> in the loop after \"logged\" is true and \"new_status\" is not NULL.\n\nAgreed\n\n>\n> +extern const char *get_procsignal_reason_desc(ProcSignalReason reason);\n>\n> Is this garbage?\n\nYes.\n\n>\n> When log_lock_waits is enabled, both \"still waiting for ...\" and \"acquired ...\"\n> messages are output. Like this, when log_recovery_conflict_waits is enabled,\n> not only \"still waiting ...\" but also \"resolved ...\" message should be output?\n> The latter message might not need to be output if the conflict is canceled\n> due to max_standby_xxx_delay parameter. The latter message would be\n> useful to know how long the recovery was waiting for the conflict. Thought?\n> It's ok to implement this as a separate patch later, though.\n\nThere was a discussion that the latter message without waiting time is\nnot necessarily needed because the canceled process will log\n\"canceling statement due to conflict with XXX\" as you mentioned. I\nagreed with that. 
But I agree that the latter message with waiting\ntime would help users, for instance when the startup process is\nwaiting for multiple processes and it takes a time to cancel all of\nthem.\n\n>\n> + Controls whether a log message is produced when the startup process\n> + is waiting longer than longer than <varname>deadlock_timeout</varname>.\n>\n> Typo: \"longer than longer than\" should be \"longer than\".\n>\n> Probably the section about hot standby in high-availability.sgml would need\n> to be updated.\n\nAgreed.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 30 Oct 2020 10:29:27 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "\n\nOn 2020/10/30 10:29, Masahiko Sawada wrote:\n> ,\n> \n> On Thu, 29 Oct 2020 at 00:16, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/10/27 9:41, Masahiko Sawada wrote:\n>>> On Tue, 20 Oct 2020 at 22:02, Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>>>>\n>>>> Hi,\n>>>>\n>>>> On 10/15/20 9:15 AM, Masahiko Sawada wrote:\n>>>>> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n>>>>>\n>>>>>\n>>>>>\n>>>>> On Thu, 15 Oct 2020 at 14:52, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>>>>>> At Thu, 15 Oct 2020 14:28:57 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in\n>>>>>>>> ereport(..(errmsg(\"%s\", _(\"hogehoge\")))) results in\n>>>>>>>> fprintf((translated(\"%s\")), translate(\"hogehoge\")).\n>>>>>>>>\n>>>>>>>> So your change (errmsg(\"%s\", gettext_noop(\"hogehoge\")) results in\n>>>>>>>>\n>>>>>>>> fprintf((translated(\"%s\")), DONT_translate(\"hogehoge\")).\n>>>>>>>>\n>>>>>>>> which leads to a translation problem.\n>>>>>>>>\n>>>>>>>> (errmsg(gettext_noop(\"hogehoge\"))\n>>>>>>> This seems equivalent to (errmsg(\"hogehoge\")), right?\n>>>>>> Yes and no. However eventually the two works the same way,\n>>>>>> \"(errmsg(gettext_noop(\"hogehoge\"))\" is a shorthand of\n>>>>>>\n>>>>>> 1: char *msg = gettext_noop(\"hogehoge\");\n>>>>>> ...\n>>>>>> 2: .. (errmsg(msg));\n>>>>>>\n>>>>>> That is, the line 1 only registers a message id \"hogehoge\" and doesn't\n>>>>>> translate. The line 2 tries to translate the content of msg and it\n>>>>>> finds the translation for the message id \"hogehoge\".\n>>>>> Understood.\n>>>>>\n>>>>>>> I think I could understand translation stuff. Given we only report the\n>>>>>>> const string returned from get_recovery_conflict_desc() without\n>>>>>>> placeholders, the patch needs to use errmsg_internal() instead while\n>>>>>>> not changing _() part. 
(errmsg(get_recovery_conflict_desc())) is not\n>>>>>>> good (warned by -Wformat-security).\n>>>>>> Ah, right. we get a complain if no value parameters added. We can\n>>>>>> silence it by adding a dummy parameter to errmsg, but I'm not sure\n>>>>>> which is preferable.\n>>>>> Okay, I'm going to use errmsg_internal() for now until a better idea comes.\n>>>>>\n>>>>> I've attached the updated patch that fixed the translation part.\n>>>>\n>>>> Thanks for reviewing and helping on this patch!\n>>>>\n>>>> The patch tester bot is currently failing due to:\n>>>>\n>>>> \"proc.c:1290:5: error: ‘standbyWaitStart’ may be used uninitialized in\n>>>> this function [-Werror=maybe-uninitialized]\"\n>>>>\n>>>> I've attached a new version with the minor change to fix it.\n>>>>\n>>>\n>>> Thank you for updating the patch!\n>>>\n>>> I've looked at the patch and revised a bit the formatting etc.\n>>>\n>>> After some thoughts, I think it might be better to report the waiting\n>>> time as well. it would help users and there is no particular reason\n>>> for logging the report only once. It also helps make the patch clean\n>>> by reducing the variables such as recovery_conflict_logged. I’ve\n>>> implemented it in the v8 patch.\n>>\n>> I read v8 patch. Here are review comments.\n> \n> Thank you for your review.\n> \n>>\n>> When recovery conflict with buffer pin happens, log message is output\n>> every deadlock_timeout. Is this intentional behavior? If yes, IMO that's\n>> not good because lots of messages can be output.\n> \n> Agreed.\n> \n> if we were to log the recovery conflict only once in bufferpin\n> conflict case, we would log it multiple times only in lock conflict\n> case. 
So I guess it's better to do that in all conflict cases.\n\nYes, I agree that this behavior basically should be consistent between all cases.\n\n> \n>>\n>> + if (log_recovery_conflict_waits)\n>> + waitStart = GetCurrentTimestamp();\n>>\n>> What happens if log_recovery_conflict_waits is off at the beginning and\n>> then it's changed during waiting for the conflict? In this case, waitStart is\n>> zero, but log_recovery_conflict_waits is on. This may cause some problems?\n> \n> Hmm, I didn't see a path that happens to reload the config file during\n> waiting for buffer cleanup lock. Even if the startup process receives\n> SIGHUP during that, it calls HandleStartupProcInterrupts() at the next\n> convenient time. It could be the beginning of main apply loop or\n> inside WaitForWALToBecomeAvailable() and so on but I didn’t see it in\n> the wait loop for taking a buffer cleanup.\n\nYes, you're right. I seem to have read the code wrongly.\n\n> However, I think it’s\n> better to use (waitStart > 0) for safety when checking if we log the\n> recovery conflict instead of log_recovery_conflict_waits.\n> \n>>\n>> + if (report_waiting)\n>> + ts = GetCurrentTimestamp();\n>>\n>> GetCurrentTimestamp() doesn't need to be called every cycle\n>> in the loop after \"logged\" is true and \"new_status\" is not NULL.\n> \n> Agreed\n> \n>>\n>> +extern const char *get_procsignal_reason_desc(ProcSignalReason reason);\n>>\n>> Is this garbage?\n> \n> Yes.\n> \n>>\n>> When log_lock_waits is enabled, both \"still waiting for ...\" and \"acquired ...\"\n>> messages are output. Like this, when log_recovery_conflict_waits is enabled,\n>> not only \"still waiting ...\" but also \"resolved ...\" message should be output?\n>> The latter message might not need to be output if the conflict is canceled\n>> due to max_standby_xxx_delay parameter. The latter message would be\n>> useful to know how long the recovery was waiting for the conflict. 
Thought?\n>> It's ok to implement this as a separate patch later, though.\n> \n> There was a discussion that the latter message without waiting time is\n> not necessarily needed because the canceled process will log\n> \"canceling statement due to conflict with XXX\" as you mentioned. I\n> agreed with that. But I agree that the latter message with waiting\n> time would help users, for instance when the startup process is\n> waiting for multiple processes and it takes a time to cancel all of\n> them.\n\nI agree that it's useful to output the wait time.\n\nBut as I told in previous email, it's ok to focus on the current patch\nfor now and then implement this as a separate patch later.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 30 Oct 2020 12:25:10 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "Hi,\n\nOn 10/30/20 4:25 AM, Fujii Masao wrote:\n> CAUTION: This email originated from outside of the organization. Do \n> not click links or open attachments unless you can confirm the sender \n> and know the content is safe.\n>\n>\n>\n> On 2020/10/30 10:29, Masahiko Sawada wrote:\n>> ,\n>>\n>> On Thu, 29 Oct 2020 at 00:16, Fujii Masao \n>> <masao.fujii@oss.nttdata.com> wrote:\n>>>\n>>>\n>>>\n>>> On 2020/10/27 9:41, Masahiko Sawada wrote:\n>>>> On Tue, 20 Oct 2020 at 22:02, Drouvot, Bertrand \n>>>> <bdrouvot@amazon.com> wrote:\n>>>>>\n>>>>> Hi,\n>>>>>\n>>>>> On 10/15/20 9:15 AM, Masahiko Sawada wrote:\n>>>>>> CAUTION: This email originated from outside of the organization. \n>>>>>> Do not click links or open attachments unless you can confirm the \n>>>>>> sender and know the content is safe.\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>> On Thu, 15 Oct 2020 at 14:52, Kyotaro Horiguchi \n>>>>>> <horikyota.ntt@gmail.com> wrote:\n>>>>>>> At Thu, 15 Oct 2020 14:28:57 +0900, Masahiko Sawada \n>>>>>>> <masahiko.sawada@2ndquadrant.com> wrote in\n>>>>>>>>> ereport(..(errmsg(\"%s\", _(\"hogehoge\")))) results in\n>>>>>>>>> fprintf((translated(\"%s\")), translate(\"hogehoge\")).\n>>>>>>>>>\n>>>>>>>>> So your change (errmsg(\"%s\", gettext_noop(\"hogehoge\")) results in\n>>>>>>>>>\n>>>>>>>>> fprintf((translated(\"%s\")), DONT_translate(\"hogehoge\")).\n>>>>>>>>>\n>>>>>>>>> which leads to a translation problem.\n>>>>>>>>>\n>>>>>>>>> (errmsg(gettext_noop(\"hogehoge\"))\n>>>>>>>> This seems equivalent to (errmsg(\"hogehoge\")), right?\n>>>>>>> Yes and no. However eventually the two works the same way,\n>>>>>>> \"(errmsg(gettext_noop(\"hogehoge\"))\" is a shorthand of\n>>>>>>>\n>>>>>>> 1: char *msg = gettext_noop(\"hogehoge\");\n>>>>>>> ...\n>>>>>>> 2: .. (errmsg(msg));\n>>>>>>>\n>>>>>>> That is, the line 1 only registers a message id \"hogehoge\" and \n>>>>>>> doesn't\n>>>>>>> translate. 
The line 2 tries to translate the content of msg and it\n>>>>>>> finds the translation for the message id \"hogehoge\".\n>>>>>> Understood.\n>>>>>>\n>>>>>>>> I think I could understand translation stuff. Given we only \n>>>>>>>> report the\n>>>>>>>> const string returned from get_recovery_conflict_desc() without\n>>>>>>>> placeholders, the patch needs to use errmsg_internal() instead \n>>>>>>>> while\n>>>>>>>> not changing _() part. (errmsg(get_recovery_conflict_desc())) \n>>>>>>>> is not\n>>>>>>>> good (warned by -Wformat-security).\n>>>>>>> Ah, right. we get a complain if no value parameters added. We can\n>>>>>>> silence it by adding a dummy parameter to errmsg, but I'm not sure\n>>>>>>> which is preferable.\n>>>>>> Okay, I'm going to use errmsg_internal() for now until a better \n>>>>>> idea comes.\n>>>>>>\n>>>>>> I've attached the updated patch that fixed the translation part.\n>>>>>\n>>>>> Thanks for reviewing and helping on this patch!\n>>>>>\n>>>>> The patch tester bot is currently failing due to:\n>>>>>\n>>>>> \"proc.c:1290:5: error: ‘standbyWaitStart’ may be used \n>>>>> uninitialized in\n>>>>> this function [-Werror=maybe-uninitialized]\"\n>>>>>\n>>>>> I've attached a new version with the minor change to fix it.\n>>>>>\n>>>>\n>>>> Thank you for updating the patch!\n>>>>\n>>>> I've looked at the patch and revised a bit the formatting etc.\n>>>>\n>>>> After some thoughts, I think it might be better to report the waiting\n>>>> time as well. it would help users and there is no particular reason\n>>>> for logging the report only once. It also helps make the patch clean\n>>>> by reducing the variables such as recovery_conflict_logged. I’ve\n>>>> implemented it in the v8 patch.\n>>>\n>>> I read v8 patch. Here are review comments.\n>>\n>> Thank you for your review.\n>>\n>>>\n>>> When recovery conflict with buffer pin happens, log message is output\n>>> every deadlock_timeout. Is this intentional behavior? 
If yes, IMO \n>>> that's\n>>> not good because lots of messages can be output.\n>>\n>> Agreed.\n>>\n>> if we were to log the recovery conflict only once in bufferpin\n>> conflict case, we would log it multiple times only in lock conflict\n>> case. So I guess it's better to do that in all conflict cases.\n>\n> Yes, I agree that this behavior basically should be consistent between \n> all cases.\n>\n>>\n>>>\n>>> + if (log_recovery_conflict_waits)\n>>> + waitStart = GetCurrentTimestamp();\n>>>\n>>> What happens if log_recovery_conflict_waits is off at the beginning and\n>>> then it's changed during waiting for the conflict? In this case, \n>>> waitStart is\n>>> zero, but log_recovery_conflict_waits is on. This may cause some \n>>> problems?\n>>\n>> Hmm, I didn't see a path that happens to reload the config file during\n>> waiting for buffer cleanup lock. Even if the startup process receives\n>> SIGHUP during that, it calls HandleStartupProcInterrupts() at the next\n>> convenient time. It could be the beginning of main apply loop or\n>> inside WaitForWALToBecomeAvailable() and so on but I didn’t see it in\n>> the wait loop for taking a buffer cleanup.\n>\n> Yes, you're right. I seem to have read the code wrongly.\n>\n>> However, I think it’s\n>> better to use (waitStart > 0) for safety when checking if we log the\n>> recovery conflict instead of log_recovery_conflict_waits.\n>>\n>>>\n>>> + if (report_waiting)\n>>> + ts = GetCurrentTimestamp();\n>>>\n>>> GetCurrentTimestamp() doesn't need to be called every cycle\n>>> in the loop after \"logged\" is true and \"new_status\" is not NULL.\n>>\n>> Agreed\n>>\n>>>\n>>> +extern const char *get_procsignal_reason_desc(ProcSignalReason \n>>> reason);\n>>>\n>>> Is this garbage?\n>>\n>> Yes.\n>>\n>>>\n>>> When log_lock_waits is enabled, both \"still waiting for ...\" and \n>>> \"acquired ...\"\n>>> messages are output. 
Like this, when log_recovery_conflict_waits is \n>>> enabled,\n>>> not only \"still waiting ...\" but also \"resolved ...\" message should \n>>> be output?\n>>> The latter message might not need to be output if the conflict is \n>>> canceled\n>>> due to max_standby_xxx_delay parameter. The latter message would be\n>>> useful to know how long the recovery was waiting for the conflict. \n>>> Thought?\n>>> It's ok to implement this as a separate patch later, though.\n>>\n>> There was a discussion that the latter message without waiting time is\n>> not necessarily needed because the canceled process will log\n>> \"canceling statement due to conflict with XXX\" as you mentioned. I\n>> agreed with that. But I agree that the latter message with waiting\n>> time would help users, for instance when the startup process is\n>> waiting for multiple processes and it takes a time to cancel all of\n>> them.\n>\n> I agree that it's useful to output the wait time.\n>\n> But as I told in previous email, it's ok to focus on the current patch\n> for now and then implement this as a separate patch later.\n>\nThanks for your work and thoughts on this patch!\n\nI've attached a new version that take your remarks (hope i did not miss \nsome of them) into account (but still leave the last one for a separate \npatch later).\n\nBertrand",
"msg_date": "Fri, 30 Oct 2020 10:02:30 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On Fri, Oct 30, 2020 at 6:02 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> Hi,\n>\n> On 10/30/20 4:25 AM, Fujii Masao wrote:\n> > CAUTION: This email originated from outside of the organization. Do\n> > not click links or open attachments unless you can confirm the sender\n> > and know the content is safe.\n> >\n> >\n> >\n> > On 2020/10/30 10:29, Masahiko Sawada wrote:\n> >> ,\n> >>\n> >> On Thu, 29 Oct 2020 at 00:16, Fujii Masao\n> >> <masao.fujii@oss.nttdata.com> wrote:\n> >>>\n> >>> I read v8 patch. Here are review comments.\n> >>\n> >> Thank you for your review.\n> >>\n\nThank you for updating the patch!\n\nLooking at the latest version patch\n(v8-0002-Log-the-standby-recovery-conflict-waits.patch), I think it\ndoesn't address some comments from Fujii-san.\n\n> >>>\n> >>> When recovery conflict with buffer pin happens, log message is output\n> >>> every deadlock_timeout. Is this intentional behavior? If yes, IMO\n> >>> that's\n> >>> not good because lots of messages can be output.\n> >>\n> >> Agreed.\n\nI think the latest patch doesn't fix the above comment. Log message\nfor recovery conflict on buffer pin is logged every deadlock_timeout.\n\n> >>\n> >> if we were to log the recovery conflict only once in bufferpin\n> >> conflict case, we would log it multiple times only in lock conflict\n> >> case. So I guess it's better to do that in all conflict cases.\n> >\n> > Yes, I agree that this behavior basically should be consistent between\n> > all cases.\n\nThe latest patch seems not to address this comment as well.\n\n> >\n> >>\n> >>>\n> >>> + if (log_recovery_conflict_waits)\n> >>> + waitStart = GetCurrentTimestamp();\n> >>>\n> >>> What happens if log_recovery_conflict_waits is off at the beginning and\n> >>> then it's changed during waiting for the conflict? In this case,\n> >>> waitStart is\n> >>> zero, but log_recovery_conflict_waits is on. 
This may cause some\n> >>> problems?\n> >>\n> >> Hmm, I didn't see a path that happens to reload the config file during\n> >> waiting for buffer cleanup lock. Even if the startup process receives\n> >> SIGHUP during that, it calls HandleStartupProcInterrupts() at the next\n> >> convenient time. It could be the beginning of main apply loop or\n> >> inside WaitForWALToBecomeAvailable() and so on but I didn’t see it in\n> >> the wait loop for taking a buffer cleanup.\n> >\n> > Yes, you're right. I seem to have read the code wrongly.\n> >\n> >> However, I think it’s\n> >> better to use (waitStart > 0) for safety when checking if we log the\n> >> recovery conflict instead of log_recovery_conflict_waits.\n> >>\n> >>>\n> >>> + if (report_waiting)\n> >>> + ts = GetCurrentTimestamp();\n> >>>\n> >>> GetCurrentTimestamp() doesn't need to be called every cycle\n> >>> in the loop after \"logged\" is true and \"new_status\" is not NULL.\n> >>\n> >> Agreed\n> >>\n> >>>\n> >>> +extern const char *get_procsignal_reason_desc(ProcSignalReason\n> >>> reason);\n> >>>\n> >>> Is this garbage?\n> >>\n> >> Yes.\n> >>\n> >>>\n> >>> When log_lock_waits is enabled, both \"still waiting for ...\" and\n> >>> \"acquired ...\"\n> >>> messages are output. Like this, when log_recovery_conflict_waits is\n> >>> enabled,\n> >>> not only \"still waiting ...\" but also \"resolved ...\" message should\n> >>> be output?\n> >>> The latter message might not need to be output if the conflict is\n> >>> canceled\n> >>> due to max_standby_xxx_delay parameter. The latter message would be\n> >>> useful to know how long the recovery was waiting for the conflict.\n> >>> Thought?\n> >>> It's ok to implement this as a separate patch later, though.\n> >>\n> >> There was a discussion that the latter message without waiting time is\n> >> not necessarily needed because the canceled process will log\n> >> \"canceling statement due to conflict with XXX\" as you mentioned. I\n> >> agreed with that. 
But I agree that the latter message with waiting\n> >> time would help users, for instance when the startup process is\n> >> waiting for multiple processes and it takes a time to cancel all of\n> >> them.\n> >\n> > I agree that it's useful to output the wait time.\n> >\n> > But as I told in previous email, it's ok to focus on the current patch\n> > for now and then implement this as a separate patch later.\n\nAgreed.\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 6 Nov 2020 11:21:48 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "Hi,\n\nOn 11/6/20 3:21 AM, Masahiko Sawada wrote:\n> On Fri, Oct 30, 2020 at 6:02 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>> Hi,\n>>\n>> On 10/30/20 4:25 AM, Fujii Masao wrote:\n>>> On 2020/10/30 10:29, Masahiko Sawada wrote:\n>>>> ,\n>>>>\n>>>> On Thu, 29 Oct 2020 at 00:16, Fujii Masao\n>>>> <masao.fujii@oss.nttdata.com> wrote:\n>>>>> I read v8 patch. Here are review comments.\n>>>> Thank you for your review.\n>>>>\n> Thank you for updating the patch!\n>\n> Looking at the latest version patch\n> (v8-0002-Log-the-standby-recovery-conflict-waits.patch), I think it\n> doesn't address some comments from Fujii-san.\n>\n>>>>> When recovery conflict with buffer pin happens, log message is output\n>>>>> every deadlock_timeout. Is this intentional behavior? If yes, IMO\n>>>>> that's\n>>>>> not good because lots of messages can be output.\n>>>> Agreed.\n> I think the latest patch doesn't fix the above comment. Log message\n> for recovery conflict on buffer pin is logged every deadlock_timeout.\n>\n>>>> if we were to log the recovery conflict only once in bufferpin\n>>>> conflict case, we would log it multiple times only in lock conflict\n>>>> case. So I guess it's better to do that in all conflict cases.\n>>> Yes, I agree that this behavior basically should be consistent between\n>>> all cases.\n> The latest patch seems not to address this comment as well.\n\nOh, I missed those ones, thanks for the feedback.\n\nNew version attached, so that recovery conflict will be logged only once \nalso for buffer pin and lock cases.\n\nBertrand",
"msg_date": "Mon, 9 Nov 2020 06:45:10 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On Mon, Nov 9, 2020 at 2:49 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> Hi,\n>\n> On 11/6/20 3:21 AM, Masahiko Sawada wrote:\n> > CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n> >\n> >\n> >\n> > On Fri, Oct 30, 2020 at 6:02 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> >> Hi,\n> >>\n> >> On 10/30/20 4:25 AM, Fujii Masao wrote:\n> >>> CAUTION: This email originated from outside of the organization. Do\n> >>> not click links or open attachments unless you can confirm the sender\n> >>> and know the content is safe.\n> >>>\n> >>>\n> >>>\n> >>> On 2020/10/30 10:29, Masahiko Sawada wrote:\n> >>>> ,\n> >>>>\n> >>>> On Thu, 29 Oct 2020 at 00:16, Fujii Masao\n> >>>> <masao.fujii@oss.nttdata.com> wrote:\n> >>>>> I read v8 patch. Here are review comments.\n> >>>> Thank you for your review.\n> >>>>\n> > Thank you for updating the patch!\n> >\n> > Looking at the latest version patch\n> > (v8-0002-Log-the-standby-recovery-conflict-waits.patch), I think it\n> > doesn't address some comments from Fujii-san.\n> >\n> >>>>> When recovery conflict with buffer pin happens, log message is output\n> >>>>> every deadlock_timeout. Is this intentional behavior? If yes, IMO\n> >>>>> that's\n> >>>>> not good because lots of messages can be output.\n> >>>> Agreed.\n> > I think the latest patch doesn't fix the above comment. Log message\n> > for recovery conflict on buffer pin is logged every deadlock_timeout.\n> >\n> >>>> if we were to log the recovery conflict only once in bufferpin\n> >>>> conflict case, we would log it multiple times only in lock conflict\n> >>>> case. 
So I guess it's better to do that in all conflict cases.\n> >>> Yes, I agree that this behavior basically should be consistent between\n> >>> all cases.\n> > The latest patch seems not to address this comment as well.\n>\n> Oh, I missed those ones, thanks for the feedback.\n>\n> New version attached, so that recovery conflict will be logged only once\n> also for buffer pin and lock cases.\n\nThank you for updating the patch.\n\nHere are review comments.\n\n+ if (report_waiting && (!logged_recovery_conflict ||\nnew_status == NULL))\n+ ts = GetCurrentTimestamp();\n\nThe condition will always be true if log_recovery_conflict_wait is\nfalse and report_waiting is true, leading to unnecessary calling of\nGetCurrentTimestamp().\n\n---\n+ <para>\n+ You can control whether a log message is produced when the startup process\n+ is waiting longer than <varname>deadlock_timeout</varname> for recovery\n+ conflicts. This is controled by the <xref\nlinkend=\"guc-log-recovery-conflict-waits\"/>\n+ parameter.\n+ </para>\n\ns/controled/controlled/\n\n---\n if (report_waiting)\n waitStart = GetCurrentTimestamp();\n\nSimilarly, we have the above code but we don't need to call\nGetCurrentTimestamp() if update_process_title is false, even if\nreport_waiting is true.\n\nI've attached the patch that fixes the above comments. It can be\napplied on top of your v8 patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/",
"msg_date": "Mon, 16 Nov 2020 14:44:38 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "Hi,\n\nOn 11/16/20 6:44 AM, Masahiko Sawada wrote:\n> Thank you for updating the patch.\n>\n> Here are review comments.\n>\n> + if (report_waiting && (!logged_recovery_conflict ||\n> new_status == NULL))\n> + ts = GetCurrentTimestamp();\n>\n> The condition will always be true if log_recovery_conflict_wait is\n> false and report_waiting is true, leading to unnecessary calling of\n> GetCurrentTimestamp().\n>\n> ---\n> + <para>\n> + You can control whether a log message is produced when the startup process\n> + is waiting longer than <varname>deadlock_timeout</varname> for recovery\n> + conflicts. This is controled by the <xref\n> linkend=\"guc-log-recovery-conflict-waits\"/>\n> + parameter.\n> + </para>\n>\n> s/controled/controlled/\n>\n> ---\n> if (report_waiting)\n> waitStart = GetCurrentTimestamp();\n>\n> Similarly, we have the above code but we don't need to call\n> GetCurrentTimestamp() if update_process_title is false, even if\n> report_waiting is true.\n>\n> I've attached the patch that fixes the above comments. It can be\n> applied on top of your v8 patch.\n\nThanks for the review and the associated fixes!\n\nI've attached a new version that contains your fixes.\n\nThanks\n\nBertrand",
"msg_date": "Mon, 16 Nov 2020 08:55:37 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On Mon, Nov 16, 2020 at 4:55 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> Hi,\n>\n> On 11/16/20 6:44 AM, Masahiko Sawada wrote:\n> > Thank you for updating the patch.\n> >\n> > Here are review comments.\n> >\n> > + if (report_waiting && (!logged_recovery_conflict ||\n> > new_status == NULL))\n> > + ts = GetCurrentTimestamp();\n> >\n> > The condition will always be true if log_recovery_conflict_wait is\n> > false and report_waiting is true, leading to unnecessary calling of\n> > GetCurrentTimestamp().\n> >\n> > ---\n> > + <para>\n> > + You can control whether a log message is produced when the startup process\n> > + is waiting longer than <varname>deadlock_timeout</varname> for recovery\n> > + conflicts. This is controled by the <xref\n> > linkend=\"guc-log-recovery-conflict-waits\"/>\n> > + parameter.\n> > + </para>\n> >\n> > s/controled/controlled/\n> >\n> > ---\n> > if (report_waiting)\n> > waitStart = GetCurrentTimestamp();\n> >\n> > Similarly, we have the above code but we don't need to call\n> > GetCurrentTimestamp() if update_process_title is false, even if\n> > report_waiting is true.\n> >\n> > I've attached the patch that fixes the above comments. It can be\n> > applied on top of your v8 patch.\n>\n> Thanks for the review and the associated fixes!\n>\n> I've attached a new version that contains your fixes.\n>\n\nThank you for updating the patch.\n\nI have other comments:\n\n+ <para>\n+ You can control whether a log message is produced when the startup process\n+ is waiting longer than <varname>deadlock_timeout</varname> for recovery\n+ conflicts. This is controlled by the\n+ <xref linkend=\"guc-log-recovery-conflict-waits\"/> parameter.\n+ </para>\n\nIt would be better to use 'WAL replay' instead of 'the startup\nprocess' for consistency with circumjacent descriptions. 
What do you\nthink?\n\n---\n@@ -1260,6 +1262,8 @@ ProcSleep(LOCALLOCK *locallock, LockMethod\nlockMethodTable)\n else\n enable_timeout_after(DEADLOCK_TIMEOUT, DeadlockTimeout);\n }\n+ else\n+ standbyWaitStart = GetCurrentTimestamp();\n\nI think we can add a check of log_recovery_conflict_waits to avoid\nunnecessary calling of GetCurrentTimestamp().\n\nI've attached the updated version patch including the above comments\nas well as adding some assertions. Please review it.\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/",
"msg_date": "Tue, 17 Nov 2020 10:09:58 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "\nOn 11/17/20 2:09 AM, Masahiko Sawada wrote:\n> On Mon, Nov 16, 2020 at 4:55 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>> Hi,\n>>\n>> On 11/16/20 6:44 AM, Masahiko Sawada wrote:\n>>> Thank you for updating the patch.\n>>>\n>>> Here are review comments.\n>>>\n>>> + if (report_waiting && (!logged_recovery_conflict ||\n>>> new_status == NULL))\n>>> + ts = GetCurrentTimestamp();\n>>>\n>>> The condition will always be true if log_recovery_conflict_wait is\n>>> false and report_waiting is true, leading to unnecessary calling of\n>>> GetCurrentTimestamp().\n>>>\n>>> ---\n>>> + <para>\n>>> + You can control whether a log message is produced when the startup process\n>>> + is waiting longer than <varname>deadlock_timeout</varname> for recovery\n>>> + conflicts. This is controled by the <xref\n>>> linkend=\"guc-log-recovery-conflict-waits\"/>\n>>> + parameter.\n>>> + </para>\n>>>\n>>> s/controled/controlled/\n>>>\n>>> ---\n>>> if (report_waiting)\n>>> waitStart = GetCurrentTimestamp();\n>>>\n>>> Similarly, we have the above code but we don't need to call\n>>> GetCurrentTimestamp() if update_process_title is false, even if\n>>> report_waiting is true.\n>>>\n>>> I've attached the patch that fixes the above comments. It can be\n>>> applied on top of your v8 patch.\n>> Thanks for the review and the associated fixes!\n>>\n>> I've attached a new version that contains your fixes.\n>>\n> Thank you for updating the patch.\n>\n> I have other comments:\n>\n> + <para>\n> + You can control whether a log message is produced when the startup process\n> + is waiting longer than <varname>deadlock_timeout</varname> for recovery\n> + conflicts. 
This is controlled by the\n> + <xref linkend=\"guc-log-recovery-conflict-waits\"/> parameter.\n> + </para>\n>\n> It would be better to use 'WAL replay' instead of 'the startup\n> process' for consistency with circumjacent descriptions. What do you\n> think?\n\nAgree that the wording is more appropriate.\n\n>\n> ---\n> @@ -1260,6 +1262,8 @@ ProcSleep(LOCALLOCK *locallock, LockMethod\n> lockMethodTable)\n> else\n> enable_timeout_after(DEADLOCK_TIMEOUT, DeadlockTimeout);\n> }\n> + else\n> + standbyWaitStart = GetCurrentTimestamp();\n>\n> I think we can add a check of log_recovery_conflict_waits to avoid\n> unnecessary calling of GetCurrentTimestamp().\n>\n> I've attached the updated version patch including the above comments\n> as well as adding some assertions. Please review it.\n>\nThat looks all good to me.\n\nThanks a lot for your precious help!\n\nBertrand\n\n\n\n",
"msg_date": "Tue, 17 Nov 2020 09:23:13 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "\n\nOn 2020/11/17 17:23, Drouvot, Bertrand wrote:\n> \n> On 11/17/20 2:09 AM, Masahiko Sawada wrote:\n>> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n>>\n>>\n>>\n>> On Mon, Nov 16, 2020 at 4:55 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>>> Hi,\n>>>\n>>> On 11/16/20 6:44 AM, Masahiko Sawada wrote:\n>>>> Thank you for updating the patch.\n>>>>\n>>>> Here are review comments.\n>>>>\n>>>> + if (report_waiting && (!logged_recovery_conflict ||\n>>>> new_status == NULL))\n>>>> + ts = GetCurrentTimestamp();\n>>>>\n>>>> The condition will always be true if log_recovery_conflict_wait is\n>>>> false and report_waiting is true, leading to unnecessary calling of\n>>>> GetCurrentTimestamp().\n>>>>\n>>>> ---\n>>>> + <para>\n>>>> + You can control whether a log message is produced when the startup process\n>>>> + is waiting longer than <varname>deadlock_timeout</varname> for recovery\n>>>> + conflicts. This is controled by the <xref\n>>>> linkend=\"guc-log-recovery-conflict-waits\"/>\n>>>> + parameter.\n>>>> + </para>\n>>>>\n>>>> s/controled/controlled/\n>>>>\n>>>> ---\n>>>> if (report_waiting)\n>>>> waitStart = GetCurrentTimestamp();\n>>>>\n>>>> Similarly, we have the above code but we don't need to call\n>>>> GetCurrentTimestamp() if update_process_title is false, even if\n>>>> report_waiting is true.\n>>>>\n>>>> I've attached the patch that fixes the above comments. It can be\n>>>> applied on top of your v8 patch.\n>>> Thanks for the review and the associated fixes!\n>>>\n>>> I've attached a new version that contains your fixes.\n>>>\n>> Thank you for updating the patch.\n>>\n>> I have other comments:\n>>\n>> + <para>\n>> + You can control whether a log message is produced when the startup process\n>> + is waiting longer than <varname>deadlock_timeout</varname> for recovery\n>> + conflicts. 
This is controlled by the\n>> + <xref linkend=\"guc-log-recovery-conflict-waits\"/> parameter.\n>> + </para>\n>>\n>> It would be better to use 'WAL replay' instead of 'the startup\n>> process' for consistency with circumjacent descriptions. What do you\n>> think?\n> \n> Agree that the wording is more appropriate.\n> \n>>\n>> ---\n>> @@ -1260,6 +1262,8 @@ ProcSleep(LOCALLOCK *locallock, LockMethod\n>> lockMethodTable)\n>> else\n>> enable_timeout_after(DEADLOCK_TIMEOUT, DeadlockTimeout);\n>> }\n>> + else\n>> + standbyWaitStart = GetCurrentTimestamp();\n>>\n>> I think we can add a check of log_recovery_conflict_waits to avoid\n>> unnecessary calling of GetCurrentTimestamp().\n>>\n>> I've attached the updated version patch including the above comments\n>> as well as adding some assertions. Please review it.\n>>\n> That looks all good to me.\n> \n> Thanks a lot for your precious help!\n\nThanks for updating the patch! Here are review comments.\n\n+ Controls whether a log message is produced when the startup process\n+ is waiting longer than <varname>deadlock_timeout</varname>\n+ for recovery conflicts.\n\nBut a log message can be produced also when the backend is waiting\nfor recovery conflict. Right? If yes, this description needs to be corrected.\n\n\n+ for recovery conflicts. This is useful in determining if recovery\n+ conflicts prevents the recovery from applying WAL.\n\n\"prevents\" should be \"prevent\"?\n\n\n+\tTimestampDifference(waitStart, GetCurrentTimestamp(), &secs, &usecs);\n+\tmsecs = secs * 1000 + usecs / 1000;\n\nGetCurrentTimestamp() is basically called before LogRecoveryConflict()\nis called. 
So isn't it better to avoid calling GetCurrentTimestamp() newly in\nLogRecoveryConflict() and to reuse the timestamp that we got?\nIt's helpful to avoid the waste of cycles.\n\n\n+\t\twhile (VirtualTransactionIdIsValid(*vxids))\n+\t\t{\n+\t\t\tPGPROC *proc = BackendIdGetProc(vxids->backendId);\n\nBackendIdGetProc() can return NULL if the backend is not active\nat that moment. This case should be handled.\n\n\n+\t\tcase PROCSIG_RECOVERY_CONFLICT_BUFFERPIN:\n+\t\t\treasonDesc = gettext_noop(\"recovery is still waiting recovery conflict on buffer pin\");\n\nIt's natural to use \"waiting for recovery\" rather than \"waiting recovery\"?\n\n\n+\t\t/* Also, set deadlock timeout for logging purpose if necessary */\n+\t\tif (log_recovery_conflict_waits)\n+\t\t{\n+\t\t\ttimeouts[cnt].id = STANDBY_TIMEOUT;\n+\t\t\ttimeouts[cnt].type = TMPARAM_AFTER;\n+\t\t\ttimeouts[cnt].delay_ms = DeadlockTimeout;\n+\t\t\tcnt++;\n+\t\t}\n\nThis needs to be executed only when the message has not been logged yet.\nRight?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 18 Nov 2020 00:44:39 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "Hi,\n\nOn 11/17/20 4:44 PM, Fujii Masao wrote:\n>\n> Thanks for updating the patch! Here are review comments.\n>\n> + Controls whether a log message is produced when the startup \n> process\n> + is waiting longer than <varname>deadlock_timeout</varname>\n> + for recovery conflicts.\n>\n> But a log message can be produced also when the backend is waiting\n> for recovery conflict. Right? If yes, this description needs to be \n> corrected.\n\nThanks for looking at it!\n\nI don't think so, only the startup process should write those new log \nmessages.\n\nWhat makes you think that would not be the case?\n\n>\n>\n> + for recovery conflicts. This is useful in determining if \n> recovery\n> + conflicts prevents the recovery from applying WAL.\n>\n> \"prevents\" should be \"prevent\"?\n\nIndeed: fixed in the new attached patch.\n\n>\n>\n> + TimestampDifference(waitStart, GetCurrentTimestamp(), &secs, \n> &usecs);\n> + msecs = secs * 1000 + usecs / 1000;\n>\n> GetCurrentTimestamp() is basically called before LogRecoveryConflict()\n> is called. So isn't it better to avoid calling GetCurrentTimestamp() \n> newly in\n> LogRecoveryConflict() and to reuse the timestamp that we got?\n> It's helpful to avoid the waste of cycles.\n>\ngood catch! fixed in the new attached patch.\n\n>\n> + while (VirtualTransactionIdIsValid(*vxids))\n> + {\n> + PGPROC *proc = \n> BackendIdGetProc(vxids->backendId);\n>\n> BackendIdGetProc() can return NULL if the backend is not active\n> at that moment. 
This case should be handled.\n>\nhandled in the new attached patch.\n>\n> + case PROCSIG_RECOVERY_CONFLICT_BUFFERPIN:\n> + reasonDesc = gettext_noop(\"recovery is still \n> waiting recovery conflict on buffer pin\");\n>\n> It's natural to use \"waiting for recovery\" rather than \"waiting \n> recovery\"?\n>\nI would be tempted to say so, the new patch makes use of \"waiting for\".\n>\n> + /* Also, set deadlock timeout for logging purpose if \n> necessary */\n> + if (log_recovery_conflict_waits)\n> + {\n> + timeouts[cnt].id = STANDBY_TIMEOUT;\n> + timeouts[cnt].type = TMPARAM_AFTER;\n> + timeouts[cnt].delay_ms = DeadlockTimeout;\n> + cnt++;\n> + }\n>\n> This needs to be executed only when the message has not been logged yet.\n> Right?\n>\ngood catch: fixed in the new attached patch.\n\nBertrand",
"msg_date": "Fri, 20 Nov 2020 10:17:39 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On Fri, Nov 20, 2020 at 6:18 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> Hi,\n>\n> On 11/17/20 4:44 PM, Fujii Masao wrote:\n> >\n> > Thanks for updating the patch! Here are review comments.\n> >\n> > + Controls whether a log message is produced when the startup\n> > process\n> > + is waiting longer than <varname>deadlock_timeout</varname>\n> > + for recovery conflicts.\n> >\n> > But a log message can be produced also when the backend is waiting\n> > for recovery conflict. Right? If yes, this description needs to be\n> > corrected.\n>\n> Thanks for looking at it!\n>\n> I don't think so, only the startup process should write those new log\n> messages.\n>\n> What makes you think that would not be the case?\n>\n> >\n> >\n> > + for recovery conflicts. This is useful in determining if\n> > recovery\n> > + conflicts prevents the recovery from applying WAL.\n> >\n> > \"prevents\" should be \"prevent\"?\n>\n> Indeed: fixed in the new attached patch.\n>\n> >\n> >\n> > + TimestampDifference(waitStart, GetCurrentTimestamp(), &secs,\n> > &usecs);\n> > + msecs = secs * 1000 + usecs / 1000;\n> >\n> > GetCurrentTimestamp() is basically called before LogRecoveryConflict()\n> > is called. So isn't it better to avoid calling GetCurrentTimestamp()\n> > newly in\n> > LogRecoveryConflict() and to reuse the timestamp that we got?\n> > It's helpful to avoid the waste of cycles.\n> >\n> good catch! fixed in the new attached patch.\n>\n> >\n> > + while (VirtualTransactionIdIsValid(*vxids))\n> > + {\n> > + PGPROC *proc =\n> > BackendIdGetProc(vxids->backendId);\n> >\n> > BackendIdGetProc() can return NULL if the backend is not active\n> > at that moment. 
This case should be handled.\n> >\n> handled in the new attached patch.\n> >\n> > + case PROCSIG_RECOVERY_CONFLICT_BUFFERPIN:\n> > + reasonDesc = gettext_noop(\"recovery is still\n> > waiting recovery conflict on buffer pin\");\n> >\n> > It's natural to use \"waiting for recovery\" rather than \"waiting\n> > recovery\"?\n> >\n> I would be tempted to say so, the new patch makes use of \"waiting for\".\n> >\n> > + /* Also, set deadlock timeout for logging purpose if\n> > necessary */\n> > + if (log_recovery_conflict_waits)\n> > + {\n> > + timeouts[cnt].id = STANDBY_TIMEOUT;\n> > + timeouts[cnt].type = TMPARAM_AFTER;\n> > + timeouts[cnt].delay_ms = DeadlockTimeout;\n> > + cnt++;\n> > + }\n> >\n> > This needs to be executed only when the message has not been logged yet.\n> > Right?\n> >\n> good catch: fixed in the new attached patch.\n>\n\nThank you for updating the patch! Here are review comments on the\nlatest version patch.\n\n+ while (VirtualTransactionIdIsValid(*vxids))\n+ {\n+ PGPROC *proc = BackendIdGetProc(vxids->backendId);\n+\n+ if (proc)\n+ {\n+ if (nprocs == 0)\n+ appendStringInfo(&buf, \"%d\", proc->pid);\n+ else\n+ appendStringInfo(&buf, \", %d\", proc->pid);\n+\n+ nprocs++;\n+ vxids++;\n+ }\n+ }\n\nWe need to increment vxids even if *proc is null. 
Otherwise, the loop won't end.\n\n---\n+ TimestampTz cur_ts = GetCurrentTimestamp();;\n\nThere is an extra semi-colon.\n\n---\n int max_standby_streaming_delay = 30 * 1000;\n+bool log_recovery_conflict_waits = false;\n+bool logged_lock_conflict = false;\n\n\n+ if (log_recovery_conflict_waits && !logged_lock_conflict)\n+ {\n+ timeouts[cnt].id = STANDBY_TIMEOUT;\n+ timeouts[cnt].type = TMPARAM_AFTER;\n+ timeouts[cnt].delay_ms = DeadlockTimeout;\n+ cnt++;\n+ }\n\nCan we pass a bool indicating if a timeout may be needed for recovery\nconflict logging from ProcSleep() to ResolveRecoveryConflictWithLock()\ninstead of using a static variable?\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 25 Nov 2020 22:20:02 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "Hi,\n\nOn 11/25/20 2:20 PM, Masahiko Sawada wrote:\n> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n>\n>\n>\n> On Fri, Nov 20, 2020 at 6:18 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>> Hi,\n>>\n>> On 11/17/20 4:44 PM, Fujii Masao wrote:\n>>> Thanks for updating the patch! Here are review comments.\n>>>\n>>> + Controls whether a log message is produced when the startup\n>>> process\n>>> + is waiting longer than <varname>deadlock_timeout</varname>\n>>> + for recovery conflicts.\n>>>\n>>> But a log message can be produced also when the backend is waiting\n>>> for recovery conflict. Right? If yes, this description needs to be\n>>> corrected.\n>> Thanks for looking at it!\n>>\n>> I don't think so, only the startup process should write those new log\n>> messages.\n>>\n>> What makes you think that would not be the case?\n>>\n>>>\n>>> + for recovery conflicts. This is useful in determining if\n>>> recovery\n>>> + conflicts prevents the recovery from applying WAL.\n>>>\n>>> \"prevents\" should be \"prevent\"?\n>> Indeed: fixed in the new attached patch.\n>>\n>>>\n>>> + TimestampDifference(waitStart, GetCurrentTimestamp(), &secs,\n>>> &usecs);\n>>> + msecs = secs * 1000 + usecs / 1000;\n>>>\n>>> GetCurrentTimestamp() is basically called before LogRecoveryConflict()\n>>> is called. So isn't it better to avoid calling GetCurrentTimestamp()\n>>> newly in\n>>> LogRecoveryConflict() and to reuse the timestamp that we got?\n>>> It's helpful to avoid the waste of cycles.\n>>>\n>> good catch! fixed in the new attached patch.\n>>\n>>> + while (VirtualTransactionIdIsValid(*vxids))\n>>> + {\n>>> + PGPROC *proc =\n>>> BackendIdGetProc(vxids->backendId);\n>>>\n>>> BackendIdGetProc() can return NULL if the backend is not active\n>>> at that moment. 
This case should be handled.\n>>>\n>> handled in the new attached patch.\n>>> + case PROCSIG_RECOVERY_CONFLICT_BUFFERPIN:\n>>> + reasonDesc = gettext_noop(\"recovery is still\n>>> waiting recovery conflict on buffer pin\");\n>>>\n>>> It's natural to use \"waiting for recovery\" rather than \"waiting\n>>> recovery\"?\n>>>\n>> I would be tempted to say so, the new patch makes use of \"waiting for\".\n>>> + /* Also, set deadlock timeout for logging purpose if\n>>> necessary */\n>>> + if (log_recovery_conflict_waits)\n>>> + {\n>>> + timeouts[cnt].id = STANDBY_TIMEOUT;\n>>> + timeouts[cnt].type = TMPARAM_AFTER;\n>>> + timeouts[cnt].delay_ms = DeadlockTimeout;\n>>> + cnt++;\n>>> + }\n>>>\n>>> This needs to be executed only when the message has not been logged yet.\n>>> Right?\n>>>\n>> good catch: fixed in the new attached patch.\n>>\n> Thank you for updating the patch! Here are review comments on the\n> latest version patch.\n>\n> + while (VirtualTransactionIdIsValid(*vxids))\n> + {\n> + PGPROC *proc = BackendIdGetProc(vxids->backendId);\n> +\n> + if (proc)\n> + {\n> + if (nprocs == 0)\n> + appendStringInfo(&buf, \"%d\", proc->pid);\n> + else\n> + appendStringInfo(&buf, \", %d\", proc->pid);\n> +\n> + nprocs++;\n> + vxids++;\n> + }\n> + }\n>\n> We need to increment vxids even if *proc is null. 
Otherwise, the loop won't end.\n\nMy bad, that's fixed.\n\n>\n> ---\n> + TimestampTz cur_ts = GetCurrentTimestamp();;\nFixed\n>\n> There is an extra semi-colon.\n>\n> ---\n> int max_standby_streaming_delay = 30 * 1000;\n> +bool log_recovery_conflict_waits = false;\n> +bool logged_lock_conflict = false;\n>\n>\n> + if (log_recovery_conflict_waits && !logged_lock_conflict)\n> + {\n> + timeouts[cnt].id = STANDBY_TIMEOUT;\n> + timeouts[cnt].type = TMPARAM_AFTER;\n> + timeouts[cnt].delay_ms = DeadlockTimeout;\n> + cnt++;\n> + }\n>\n> Can we pass a bool indicating if a timeout may be needed for recovery\n> conflict logging from ProcSleep() to ResolveRecoveryConflictWithLock()\n> instead of using a static variable?\n\nYeah that makes more sense, specially as we already have \nlogged_recovery_conflict at our disposal.\n\nNew patch version attached.\n\nThanks!\n\nBertrand",
"msg_date": "Wed, 25 Nov 2020 16:47:44 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On Thu, Nov 26, 2020 at 12:49 AM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> Hi,\n>\n> On 11/25/20 2:20 PM, Masahiko Sawada wrote:\n> > CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n> >\n> >\n> >\n> > On Fri, Nov 20, 2020 at 6:18 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> >> Hi,\n> >>\n> >> On 11/17/20 4:44 PM, Fujii Masao wrote:\n> >>> Thanks for updating the patch! Here are review comments.\n> >>>\n> >>> + Controls whether a log message is produced when the startup\n> >>> process\n> >>> + is waiting longer than <varname>deadlock_timeout</varname>\n> >>> + for recovery conflicts.\n> >>>\n> >>> But a log message can be produced also when the backend is waiting\n> >>> for recovery conflict. Right? If yes, this description needs to be\n> >>> corrected.\n> >> Thanks for looking at it!\n> >>\n> >> I don't think so, only the startup process should write those new log\n> >> messages.\n> >>\n> >> What makes you think that would not be the case?\n> >>\n> >>>\n> >>> + for recovery conflicts. This is useful in determining if\n> >>> recovery\n> >>> + conflicts prevents the recovery from applying WAL.\n> >>>\n> >>> \"prevents\" should be \"prevent\"?\n> >> Indeed: fixed in the new attached patch.\n> >>\n> >>>\n> >>> + TimestampDifference(waitStart, GetCurrentTimestamp(), &secs,\n> >>> &usecs);\n> >>> + msecs = secs * 1000 + usecs / 1000;\n> >>>\n> >>> GetCurrentTimestamp() is basically called before LogRecoveryConflict()\n> >>> is called. So isn't it better to avoid calling GetCurrentTimestamp()\n> >>> newly in\n> >>> LogRecoveryConflict() and to reuse the timestamp that we got?\n> >>> It's helpful to avoid the waste of cycles.\n> >>>\n> >> good catch! 
fixed in the new attached patch.\n> >>\n> >>> + while (VirtualTransactionIdIsValid(*vxids))\n> >>> + {\n> >>> + PGPROC *proc =\n> >>> BackendIdGetProc(vxids->backendId);\n> >>>\n> >>> BackendIdGetProc() can return NULL if the backend is not active\n> >>> at that moment. This case should be handled.\n> >>>\n> >> handled in the new attached patch.\n> >>> + case PROCSIG_RECOVERY_CONFLICT_BUFFERPIN:\n> >>> + reasonDesc = gettext_noop(\"recovery is still\n> >>> waiting recovery conflict on buffer pin\");\n> >>>\n> >>> It's natural to use \"waiting for recovery\" rather than \"waiting\n> >>> recovery\"?\n> >>>\n> >> I would be tempted to say so, the new patch makes use of \"waiting for\".\n> >>> + /* Also, set deadlock timeout for logging purpose if\n> >>> necessary */\n> >>> + if (log_recovery_conflict_waits)\n> >>> + {\n> >>> + timeouts[cnt].id = STANDBY_TIMEOUT;\n> >>> + timeouts[cnt].type = TMPARAM_AFTER;\n> >>> + timeouts[cnt].delay_ms = DeadlockTimeout;\n> >>> + cnt++;\n> >>> + }\n> >>>\n> >>> This needs to be executed only when the message has not been logged yet.\n> >>> Right?\n> >>>\n> >> good catch: fixed in the new attached patch.\n> >>\n> > Thank you for updating the patch! Here are review comments on the\n> > latest version patch.\n> >\n> > + while (VirtualTransactionIdIsValid(*vxids))\n> > + {\n> > + PGPROC *proc = BackendIdGetProc(vxids->backendId);\n> > +\n> > + if (proc)\n> > + {\n> > + if (nprocs == 0)\n> > + appendStringInfo(&buf, \"%d\", proc->pid);\n> > + else\n> > + appendStringInfo(&buf, \", %d\", proc->pid);\n> > +\n> > + nprocs++;\n> > + vxids++;\n> > + }\n> > + }\n> >\n> > We need to increment vxids even if *proc is null. 
Otherwise, the loop won't end.\n>\n> My bad, that's fixed.\n>\n> >\n> > ---\n> > + TimestampTz cur_ts = GetCurrentTimestamp();;\n> Fixed\n> >\n> > There is an extra semi-colon.\n> >\n> > ---\n> > int max_standby_streaming_delay = 30 * 1000;\n> > +bool log_recovery_conflict_waits = false;\n> > +bool logged_lock_conflict = false;\n> >\n> >\n> > + if (log_recovery_conflict_waits && !logged_lock_conflict)\n> > + {\n> > + timeouts[cnt].id = STANDBY_TIMEOUT;\n> > + timeouts[cnt].type = TMPARAM_AFTER;\n> > + timeouts[cnt].delay_ms = DeadlockTimeout;\n> > + cnt++;\n> > + }\n> >\n> > Can we pass a bool indicating if a timeout may be needed for recovery\n> > conflict logging from ProcSleep() to ResolveRecoveryConflictWithLock()\n> > instead of using a static variable?\n>\n> Yeah that makes more sense, specially as we already have\n> logged_recovery_conflict at our disposal.\n>\n> New patch version attached.\n>\n\nThank you for updating the patch! The patch works fine and looks good\nto me except for the following small comments:\n\n+/*\n+ * Log the recovery conflict.\n+ *\n+ * waitStart is the timestamp when the caller started to wait. This\nfunction also\n+ * reports the details about the conflicting process ids if *waitlist\nis not NULL.\n+ */\n+void\n+LogRecoveryConflict(ProcSignalReason reason, TimestampTz waitStart,\n+ TimestampTz cur_ts,\nVirtualTransactionId *waitlist)\n\nI think it's better to explain cur_ts as well in the function comment.\n\nRegarding the function arguments, 'waitStart' is camel case whereas\n'cur_ts' is snake case and 'waitlist' is using only lower cases. I\nthink we should unify them.\n\n---\n-ResolveRecoveryConflictWithLock(LOCKTAG locktag)\n+ResolveRecoveryConflictWithLock(LOCKTAG locktag, bool logged_recovery_conflict)\n\nThe function argument name 'logged_recovery_conflict' sounds a bit\nredundant to me as this function is used only for recovery conflict\nresolution. How about 'need_log' or something? 
Also it’s better to\nexplain it in the function comment.\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 27 Nov 2020 14:04:51 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "Hi,\n\nOn 11/27/20 6:04 AM, Masahiko Sawada wrote:\n> Thank you for updating the patch! The patch works fine and looks good\n> to me except for the following small comments:\n>\n> +/*\n> + * Log the recovery conflict.\n> + *\n> + * waitStart is the timestamp when the caller started to wait. This\n> function also\n> + * reports the details about the conflicting process ids if *waitlist\n> is not NULL.\n> + */\n> +void\n> +LogRecoveryConflict(ProcSignalReason reason, TimestampTz waitStart,\n> + TimestampTz cur_ts,\n> VirtualTransactionId *waitlist)\n>\n> I think it's better to explain cur_ts as well in the function comment.\n>\n> Regarding the function arguments, 'waitStart' is camel case whereas\n> 'cur_ts' is snake case and 'waitlist' is using only lower cases. I\n> think we should unify them.\n>\n> ---\n> -ResolveRecoveryConflictWithLock(LOCKTAG locktag)\n> +ResolveRecoveryConflictWithLock(LOCKTAG locktag, bool logged_recovery_conflict)\n>\n> The function argument name 'logged_recovery_conflict' sounds a bit\n> redundant to me as this function is used only for recovery conflict\n> resolution. How about 'need_log' or something? Also it’s better to\n> explain it in the function comment.\n\nThanks for reviewing!\n\nI have addressed your comments in the new attached version.\n\nThanks\n\nBertrand",
"msg_date": "Fri, 27 Nov 2020 10:07:40 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On 2020-Nov-27, Drouvot, Bertrand wrote:\n\n> +\t\tif (nprocs > 0)\n> +\t\t{\n> +\t\t\tereport(LOG,\n> +\t\t\t\t\t(errmsg(\"%s after %ld.%03d ms\",\n> +\t\t\t\t\t\t\tget_recovery_conflict_desc(reason), msecs, usecs),\n> +\t\t\t\t\t (errdetail_log_plural(\"Conflicting process: %s.\",\n> +\t\t\t\t\t\t\t\t\t\t \"Conflicting processes: %s.\",\n> +\t\t\t\t\t\t\t\t\t\t nprocs, buf.data))));\n> +\t\t}\n\n> +/* Return the description of recovery conflict */\n> +static const char *\n> +get_recovery_conflict_desc(ProcSignalReason reason)\n> +{\n> +\tconst char *reasonDesc = gettext_noop(\"unknown reason\");\n> +\n> +\tswitch (reason)\n> +\t{\n> +\t\tcase PROCSIG_RECOVERY_CONFLICT_BUFFERPIN:\n> +\t\t\treasonDesc = gettext_noop(\"recovery is still waiting for recovery conflict on buffer pin\");\n> +\t\t\tbreak;\n\nThis doesn't work from a translation point of view. First, you're\nbuilding a sentence from parts, which is against policy. Second, you're\nnot actually invoking gettext to translate the string returned by\nget_recovery_conflict_desc.\n\nI think this needs to be rethought. To handle the first problem I\nsuggest to split the error message in two. One phrase is the complain\nthat recovery is waiting, and the other string is the reason for the\nwait. Separate both either by splitting with a :, or alternatively put\nthe other sentence in DETAIL. (The latter would require to mix with the\nlist of conflicting processes, which might be hard.)\n\nThe first idea would look like this:\n\nLOG: recovery still waiting after %ld.03d ms: for recovery conflict on buffer pin\nDETAIL: Conflicting processes: 1, 2, 3.\n\nTo achieve this, apart from editing the messages returned by\nget_recovery_conflict_desc, you'd need to ereport this way:\n\n ereport(LOG,\n errmsg(\"recovery still waiting after %ld.%03d ms: %s\",\n msecs, usecs, _(get_recovery_conflict_desc(reason))),\n errdetail_log_plural(\"Conflicting process: %s.\",\n \"Conflicting processes: %s.\",\n\n\n",
"msg_date": "Fri, 27 Nov 2020 10:40:57 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "Hi,\n\nOn 11/27/20 2:40 PM, Alvaro Herrera wrote:\n> On 2020-Nov-27, Drouvot, Bertrand wrote:\n>\n>> + if (nprocs > 0)\n>> + {\n>> + ereport(LOG,\n>> + (errmsg(\"%s after %ld.%03d ms\",\n>> + get_recovery_conflict_desc(reason), msecs, usecs),\n>> + (errdetail_log_plural(\"Conflicting process: %s.\",\n>> + \"Conflicting processes: %s.\",\n>> + nprocs, buf.data))));\n>> + }\n>> +/* Return the description of recovery conflict */\n>> +static const char *\n>> +get_recovery_conflict_desc(ProcSignalReason reason)\n>> +{\n>> + const char *reasonDesc = gettext_noop(\"unknown reason\");\n>> +\n>> + switch (reason)\n>> + {\n>> + case PROCSIG_RECOVERY_CONFLICT_BUFFERPIN:\n>> + reasonDesc = gettext_noop(\"recovery is still waiting for recovery conflict on buffer pin\");\n>> + break;\n> This doesn't work from a translation point of view. First, you're\n> building a sentence from parts, which is against policy. Second, you're\n> not actually invoking gettext to translate the string returned by\n> get_recovery_conflict_desc.\n>\n> I think this needs to be rethought. To handle the first problem I\n> suggest to split the error message in two. One phrase is the complain\n> that recovery is waiting, and the other string is the reason for the\n> wait. Separate both either by splitting with a :, or alternatively put\n> the other sentence in DETAIL. 
(The latter would require to mix with the\n> list of conflicting processes, which might be hard.)\n>\n> The first idea would look like this:\n>\n> LOG: recovery still waiting after %ld.03d ms: for recovery conflict on buffer pin\n> DETAIL: Conflicting processes: 1, 2, 3.\n>\n> To achieve this, apart from editing the messages returned by\n> get_recovery_conflict_desc, you'd need to ereport this way:\n>\n> ereport(LOG,\n> errmsg(\"recovery still waiting after %ld.%03d ms: %s\",\n> msecs, usecs, _(get_recovery_conflict_desc(reason))),\n> errdetail_log_plural(\"Conflicting process: %s.\",\n> \"Conflicting processes: %s.\",\n>\nThanks for your comments, I did not know that.\n\nI've attached a new version that takes your comments into account.\n\nThanks!\n\nBertrand",
"msg_date": "Sat, 28 Nov 2020 12:08:24 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "Hi Bertrand,\n\nOn 2020-Nov-28, Drouvot, Bertrand wrote:\n\n> +\t\tif (nprocs > 0)\n> +\t\t{\n> +\t\t\tereport(LOG,\n> +\t\t\t\t\t(errmsg(\"recovery still waiting after %ld.%03d ms: %s\",\n> +\t\t\t\t\t\t\tmsecs, usecs, _(get_recovery_conflict_desc(reason))),\n> +\t\t\t\t\t (errdetail_log_plural(\"Conflicting process: %s.\",\n> +\t\t\t\t\t\t\t\t\t\t \"Conflicting processes: %s.\",\n> +\t\t\t\t\t\t\t\t\t\t nprocs, buf.data))));\n> +\t\t}\n> +\t\telse\n> +\t\t{\n> +\t\t\tereport(LOG,\n> +\t\t\t\t\t(errmsg(\"recovery still waiting after %ld.%03d ms: %s\",\n> +\t\t\t\t\t\t\tmsecs, usecs, _(get_recovery_conflict_desc(reason)))));\n> +\t\t}\n> +\n> +\t\tpfree(buf.data);\n> +\t}\n> +\telse\n> +\t\tereport(LOG,\n> +\t\t\t\t(errmsg(\"recovery still waiting after %ld.%03d ms: %s\",\n> +\t\t\t\t\t\tmsecs, usecs, _(get_recovery_conflict_desc(reason)))));\n> +}\n\nAnother trivial stylistic point is that you can collapse all these\nereport calls into one, with something like\n\n ereport(LOG,\n errmsg(\"recovery still waiting after ...\", opts),\n waitlist != NULL ? errdetail_log_plural(\"foo bar baz\", opts) : 0);\n\nwhere the \"waitlist\" has been constructed beforehand, or is set to NULL\nif there's no process list.\n\n> +\tswitch (reason)\n> +\t{\n> +\t\tcase PROCSIG_RECOVERY_CONFLICT_BUFFERPIN:\n> +\t\t\treasonDesc = gettext_noop(\"for recovery conflict on buffer pin\");\n> +\t\t\tbreak;\n\nPure bikeshedding after discussing this with my pillow: I think I'd get\nrid of the initial \"for\" in these messages.\n\n\n",
"msg_date": "Sat, 28 Nov 2020 14:36:26 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "Hi Alvaro,\n\nOn 11/28/20 6:36 PM, Alvaro Herrera wrote:\n> Hi Bertrand,\n>\n> On 2020-Nov-28, Drouvot, Bertrand wrote:\n>\n>> + if (nprocs > 0)\n>> + {\n>> + ereport(LOG,\n>> + (errmsg(\"recovery still waiting after %ld.%03d ms: %s\",\n>> + msecs, usecs, _(get_recovery_conflict_desc(reason))),\n>> + (errdetail_log_plural(\"Conflicting process: %s.\",\n>> + \"Conflicting processes: %s.\",\n>> + nprocs, buf.data))));\n>> + }\n>> + else\n>> + {\n>> + ereport(LOG,\n>> + (errmsg(\"recovery still waiting after %ld.%03d ms: %s\",\n>> + msecs, usecs, _(get_recovery_conflict_desc(reason)))));\n>> + }\n>> +\n>> + pfree(buf.data);\n>> + }\n>> + else\n>> + ereport(LOG,\n>> + (errmsg(\"recovery still waiting after %ld.%03d ms: %s\",\n>> + msecs, usecs, _(get_recovery_conflict_desc(reason)))));\n>> +}\n> Another trivial stylistic point is that you can collapse all these\n> ereport calls into one, with something like\n>\n> ereport(LOG,\n> errmsg(\"recovery still waiting after ...\", opts),\n> waitlist != NULL ? errdetail_log_plural(\"foo bar baz\", opts) : 0);\n>\n> where the \"waitlist\" has been constructed beforehand, or is set to NULL\n> if there's no process list.\n\nNice!\n\n>\n>> + switch (reason)\n>> + {\n>> + case PROCSIG_RECOVERY_CONFLICT_BUFFERPIN:\n>> + reasonDesc = gettext_noop(\"for recovery conflict on buffer pin\");\n>> + break;\n> Pure bikeshedding after discussing this with my pillow: I think I'd get\n> rid of the initial \"for\" in these messages.\n\nboth comments implemented in the new attached version.\n\nThanks!\n\nBertrand",
"msg_date": "Sun, 29 Nov 2020 07:47:30 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On Sun, Nov 29, 2020 at 3:47 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> Hi Alvaro,\n>\n> On 11/28/20 6:36 PM, Alvaro Herrera wrote:\n> > Hi Bertrand,\n> >\n> > On 2020-Nov-28, Drouvot, Bertrand wrote:\n> >\n> >> + if (nprocs > 0)\n> >> + {\n> >> + ereport(LOG,\n> >> + (errmsg(\"recovery still waiting after %ld.%03d ms: %s\",\n> >> + msecs, usecs, _(get_recovery_conflict_desc(reason))),\n> >> + (errdetail_log_plural(\"Conflicting process: %s.\",\n> >> + \"Conflicting processes: %s.\",\n> >> + nprocs, buf.data))));\n> >> + }\n> >> + else\n> >> + {\n> >> + ereport(LOG,\n> >> + (errmsg(\"recovery still waiting after %ld.%03d ms: %s\",\n> >> + msecs, usecs, _(get_recovery_conflict_desc(reason)))));\n> >> + }\n> >> +\n> >> + pfree(buf.data);\n> >> + }\n> >> + else\n> >> + ereport(LOG,\n> >> + (errmsg(\"recovery still waiting after %ld.%03d ms: %s\",\n> >> + msecs, usecs, _(get_recovery_conflict_desc(reason)))));\n> >> +}\n> > Another trivial stylistic point is that you can collapse all these\n> > ereport calls into one, with something like\n> >\n> > ereport(LOG,\n> > errmsg(\"recovery still waiting after ...\", opts),\n> > waitlist != NULL ? 
errdetail_log_plural(\"foo bar baz\", opts) : 0);\n> >\n> > where the \"waitlist\" has been constructed beforehand, or is set to NULL\n> > if there's no process list.\n>\n> Nice!\n>\n> >\n> >> + switch (reason)\n> >> + {\n> >> + case PROCSIG_RECOVERY_CONFLICT_BUFFERPIN:\n> >> + reasonDesc = gettext_noop(\"for recovery conflict on buffer pin\");\n> >> + break;\n> > Pure bikeshedding after discussing this with my pillow: I think I'd get\n> > rid of the initial \"for\" in these messages.\n>\n> both comments implemented in the new attached version.\n>\n\nThank you for updating the patch!\n\n+ /* Also, set deadlock timeout for logging purpose if necessary */\n+ if (log_recovery_conflict_waits && !need_log)\n+ {\n+ timeouts[cnt].id = STANDBY_TIMEOUT;\n+ timeouts[cnt].type = TMPARAM_AFTER;\n+ timeouts[cnt].delay_ms = DeadlockTimeout;\n+ cnt++;\n+ }\n\nYou changed to 'need_log' but this condition seems not correct. I\nthink we need to set this timeout when both log_recovery_conflict and\nneed_log is true.\n\nThe rest of the patch looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 30 Nov 2020 12:41:27 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "Hi,\n\nOn 11/30/20 4:41 AM, Masahiko Sawada wrote:\n> On Sun, Nov 29, 2020 at 3:47 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>> Hi Alvaro,\n>>\n>> On 11/28/20 6:36 PM, Alvaro Herrera wrote:\n>>> Hi Bertrand,\n>>>\n>>> On 2020-Nov-28, Drouvot, Bertrand wrote:\n>>>\n>>>> + if (nprocs > 0)\n>>>> + {\n>>>> + ereport(LOG,\n>>>> + (errmsg(\"recovery still waiting after %ld.%03d ms: %s\",\n>>>> + msecs, usecs, _(get_recovery_conflict_desc(reason))),\n>>>> + (errdetail_log_plural(\"Conflicting process: %s.\",\n>>>> + \"Conflicting processes: %s.\",\n>>>> + nprocs, buf.data))));\n>>>> + }\n>>>> + else\n>>>> + {\n>>>> + ereport(LOG,\n>>>> + (errmsg(\"recovery still waiting after %ld.%03d ms: %s\",\n>>>> + msecs, usecs, _(get_recovery_conflict_desc(reason)))));\n>>>> + }\n>>>> +\n>>>> + pfree(buf.data);\n>>>> + }\n>>>> + else\n>>>> + ereport(LOG,\n>>>> + (errmsg(\"recovery still waiting after %ld.%03d ms: %s\",\n>>>> + msecs, usecs, _(get_recovery_conflict_desc(reason)))));\n>>>> +}\n>>> Another trivial stylistic point is that you can collapse all these\n>>> ereport calls into one, with something like\n>>>\n>>> ereport(LOG,\n>>> errmsg(\"recovery still waiting after ...\", opts),\n>>> waitlist != NULL ? 
errdetail_log_plural(\"foo bar baz\", opts) : 0);\n>>>\n>>> where the \"waitlist\" has been constructed beforehand, or is set to NULL\n>>> if there's no process list.\n>> Nice!\n>>\n>>>> + switch (reason)\n>>>> + {\n>>>> + case PROCSIG_RECOVERY_CONFLICT_BUFFERPIN:\n>>>> + reasonDesc = gettext_noop(\"for recovery conflict on buffer pin\");\n>>>> + break;\n>>> Pure bikeshedding after discussing this with my pillow: I think I'd get\n>>> rid of the initial \"for\" in these messages.\n>> both comments implemented in the new attached version.\n>>\n> Thank you for updating the patch!\n>\n> + /* Also, set deadlock timeout for logging purpose if necessary */\n> + if (log_recovery_conflict_waits && !need_log)\n> + {\n> + timeouts[cnt].id = STANDBY_TIMEOUT;\n> + timeouts[cnt].type = TMPARAM_AFTER;\n> + timeouts[cnt].delay_ms = DeadlockTimeout;\n> + cnt++;\n> + }\n>\n> You changed to 'need_log' but this condition seems not correct. I\n> think we need to set this timeout when both log_recovery_conflict and\n> need_log is true.\n\nNice catch!\n\nIn fact it behaves correctly, it's just the 'need_log' name that is\nmisleading: I renamed it to 'already_logged' in the new attached version.\n\n> The rest of the patch looks good to me.\n\nGreat!\n\nThanks\n\nBertrand",
"msg_date": "Mon, 30 Nov 2020 07:45:59 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On Mon, Nov 30, 2020 at 3:46 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> Hi,\n>\n> On 11/30/20 4:41 AM, Masahiko Sawada wrote:\n> > On Sun, Nov 29, 2020 at 3:47 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> >> Hi Alvaro,\n> >>\n> >> On 11/28/20 6:36 PM, Alvaro Herrera wrote:\n> >>> Hi Bertrand,\n> >>>\n> >>> On 2020-Nov-28, Drouvot, Bertrand wrote:\n> >>>\n> >>>> + if (nprocs > 0)\n> >>>> + {\n> >>>> + ereport(LOG,\n> >>>> + (errmsg(\"recovery still waiting after %ld.%03d ms: %s\",\n> >>>> + msecs, usecs, _(get_recovery_conflict_desc(reason))),\n> >>>> + (errdetail_log_plural(\"Conflicting process: %s.\",\n> >>>> + \"Conflicting processes: %s.\",\n> >>>> + nprocs, buf.data))));\n> >>>> + }\n> >>>> + else\n> >>>> + {\n> >>>> + ereport(LOG,\n> >>>> + (errmsg(\"recovery still waiting after %ld.%03d ms: %s\",\n> >>>> + msecs, usecs, _(get_recovery_conflict_desc(reason)))));\n> >>>> + }\n> >>>> +\n> >>>> + pfree(buf.data);\n> >>>> + }\n> >>>> + else\n> >>>> + ereport(LOG,\n> >>>> + (errmsg(\"recovery still waiting after %ld.%03d ms: %s\",\n> >>>> + msecs, usecs, _(get_recovery_conflict_desc(reason)))));\n> >>>> +}\n> >>> Another trivial stylistic point is that you can collapse all these\n> >>> ereport calls into one, with something like\n> >>>\n> >>> ereport(LOG,\n> >>> errmsg(\"recovery still waiting after ...\", opts),\n> >>> waitlist != NULL ? 
errdetail_log_plural(\"foo bar baz\", opts) : 0);\n> >>>\n> >>> where the \"waitlist\" has been constructed beforehand, or is set to NULL\n> >>> if there's no process list.\n> >> Nice!\n> >>\n> >>>> + switch (reason)\n> >>>> + {\n> >>>> + case PROCSIG_RECOVERY_CONFLICT_BUFFERPIN:\n> >>>> + reasonDesc = gettext_noop(\"for recovery conflict on buffer pin\");\n> >>>> + break;\n> >>> Pure bikeshedding after discussing this with my pillow: I think I'd get\n> >>> rid of the initial \"for\" in these messages.\n> >> both comments implemented in the new attached version.\n> >>\n> > Thank you for updating the patch!\n> >\n> > + /* Also, set deadlock timeout for logging purpose if necessary */\n> > + if (log_recovery_conflict_waits && !need_log)\n> > + {\n> > + timeouts[cnt].id = STANDBY_TIMEOUT;\n> > + timeouts[cnt].type = TMPARAM_AFTER;\n> > + timeouts[cnt].delay_ms = DeadlockTimeout;\n> > + cnt++;\n> > + }\n> >\n> > You changed to 'need_log' but this condition seems not correct. I\n> > think we need to set this timeout when both log_recovery_conflict and\n> > need_log is true.\n>\n> Nice catch!\n>\n> In fact it behaves correctly, that's jut the 'need_log' name that is\n> miss leading: I renamed it to 'already_logged' in the new attached version.\n>\n\nThanks! I'd prefer 'need_log' because we can check it using the\naffirmative form in that condition, which would make the code more\nreadable a bit. But I'd like to leave it to committers. I've marked\nthis patch as \"Ready for Committer\".\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 30 Nov 2020 16:26:58 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "\n\nOn 2020/11/20 18:17, Drouvot, Bertrand wrote:\n> Hi,\n> \n> On 11/17/20 4:44 PM, Fujii Masao wrote:\n>>\n>> Thanks for updating the patch! Here are review comments.\n>>\n>> + Controls whether a log message is produced when the startup process\n>> + is waiting longer than <varname>deadlock_timeout</varname>\n>> + for recovery conflicts.\n>>\n>> But a log message can be produced also when the backend is waiting\n>> for recovery conflict. Right? If yes, this description needs to be corrected.\n> \n> Thanks for looking at it!\n> \n> I don't think so, only the startup process should write those new log messages.\n> \n> What makes you think that would not be the case?\n\nProbably my misunderstanding of the patch did that. Sorry for the noise.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 1 Dec 2020 02:59:15 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "\n\nOn 2020/11/30 16:26, Masahiko Sawada wrote:\n> On Mon, Nov 30, 2020 at 3:46 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>>\n>> Hi,\n>>\n>> On 11/30/20 4:41 AM, Masahiko Sawada wrote:\n>>> On Sun, Nov 29, 2020 at 3:47 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>>>> Hi Alvaro,\n>>>>\n>>>> On 11/28/20 6:36 PM, Alvaro Herrera wrote:\n>>>>> Hi Bertrand,\n>>>>>\n>>>>> On 2020-Nov-28, Drouvot, Bertrand wrote:\n>>>>>\n>>>>>> + if (nprocs > 0)\n>>>>>> + {\n>>>>>> + ereport(LOG,\n>>>>>> + (errmsg(\"recovery still waiting after %ld.%03d ms: %s\",\n>>>>>> + msecs, usecs, _(get_recovery_conflict_desc(reason))),\n>>>>>> + (errdetail_log_plural(\"Conflicting process: %s.\",\n>>>>>> + \"Conflicting processes: %s.\",\n>>>>>> + nprocs, buf.data))));\n>>>>>> + }\n>>>>>> + else\n>>>>>> + {\n>>>>>> + ereport(LOG,\n>>>>>> + (errmsg(\"recovery still waiting after %ld.%03d ms: %s\",\n>>>>>> + msecs, usecs, _(get_recovery_conflict_desc(reason)))));\n>>>>>> + }\n>>>>>> +\n>>>>>> + pfree(buf.data);\n>>>>>> + }\n>>>>>> + else\n>>>>>> + ereport(LOG,\n>>>>>> + (errmsg(\"recovery still waiting after %ld.%03d ms: %s\",\n>>>>>> + msecs, usecs, _(get_recovery_conflict_desc(reason)))));\n>>>>>> +}\n>>>>> Another trivial stylistic point is that you can collapse all these\n>>>>> ereport calls into one, with something like\n>>>>>\n>>>>> ereport(LOG,\n>>>>> errmsg(\"recovery still waiting after ...\", opts),\n>>>>> waitlist != NULL ? 
errdetail_log_plural(\"foo bar baz\", opts) : 0);\n>>>>>\n>>>>> where the \"waitlist\" has been constructed beforehand, or is set to NULL\n>>>>> if there's no process list.\n>>>> Nice!\n>>>>\n>>>>>> + switch (reason)\n>>>>>> + {\n>>>>>> + case PROCSIG_RECOVERY_CONFLICT_BUFFERPIN:\n>>>>>> + reasonDesc = gettext_noop(\"for recovery conflict on buffer pin\");\n>>>>>> + break;\n>>>>> Pure bikeshedding after discussing this with my pillow: I think I'd get\n>>>>> rid of the initial \"for\" in these messages.\n>>>> both comments implemented in the new attached version.\n>>>>\n>>> Thank you for updating the patch!\n>>>\n>>> + /* Also, set deadlock timeout for logging purpose if necessary */\n>>> + if (log_recovery_conflict_waits && !need_log)\n>>> + {\n>>> + timeouts[cnt].id = STANDBY_TIMEOUT;\n>>> + timeouts[cnt].type = TMPARAM_AFTER;\n>>> + timeouts[cnt].delay_ms = DeadlockTimeout;\n>>> + cnt++;\n>>> + }\n>>>\n>>> You changed to 'need_log' but this condition seems not correct. I\n>>> think we need to set this timeout when both log_recovery_conflict and\n>>> need_log is true.\n>>\n>> Nice catch!\n>>\n>> In fact it behaves correctly, that's jut the 'need_log' name that is\n>> miss leading: I renamed it to 'already_logged' in the new attached version.\n>>\n> \n> Thanks! I'd prefer 'need_log' because we can check it using the\n> affirmative form in that condition, which would make the code more\n> readable a bit. But I'd like to leave it to committers. I've marked\n> this patch as \"Ready for Committer\".\n\nI'm still in the middle of the review, but please let me share\nmy current review comments.\n\n+\t/* Set wait start timestamp if logging is enabled */\n+\tif (log_recovery_conflict_waits)\n+\t\twaitStart = GetCurrentTimestamp();\n\nThis seems to cause even the primary server to call GetCurrentTimestamp()\nif logging is enabled. 
To avoid unnecessary GetCurrentTimestamp(),\nwe should add \"InHotStandby\" into the if-condition?\n\n+\tinitStringInfo(&buf);\n+\n+\tif (wait_list)\n\nIsn't it better to call initStringInfo() only when wait_list is not NULL?\nFor example, we can update the code so that it's executed when nprocs == 0.\n\n+\t\t\tif (proc)\n+\t\t\t{\n+\t\t\t\tif (nprocs == 0)\n+\t\t\t\t\tappendStringInfo(&buf, \"%d\", proc->pid);\n+\t\t\t\telse\n+\t\t\t\t\tappendStringInfo(&buf, \", %d\", proc->pid);\n+\n+\t\t\t\tnprocs++;\n\nWhat happens if all the backends in wait_list have gone? In other words,\nhow should we handle the case where nprocs == 0 (i.e., nprocs has not been\nincremented at all)? This would very rarely happen, but can happen.\nIn this case, since buf.data is empty, at least there seems no need to log\nthe list of conflicting processes in detail message.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 1 Dec 2020 03:04:07 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On 2020-Dec-01, Fujii Masao wrote:\n\n> +\t\t\tif (proc)\n> +\t\t\t{\n> +\t\t\t\tif (nprocs == 0)\n> +\t\t\t\t\tappendStringInfo(&buf, \"%d\", proc->pid);\n> +\t\t\t\telse\n> +\t\t\t\t\tappendStringInfo(&buf, \", %d\", proc->pid);\n> +\n> +\t\t\t\tnprocs++;\n> \n> What happens if all the backends in wait_list have gone? In other words,\n> how should we handle the case where nprocs == 0 (i.e., nprocs has not been\n> incrmented at all)? This would very rarely happen, but can happen.\n> In this case, since buf.data is empty, at least there seems no need to log\n> the list of conflicting processes in detail message.\n\nYes, I noticed this too; this can be simplified by changing the\ncondition in the ereport() call to be \"nprocs > 0\" (rather than\nwait_list being null), otherwise not print the errdetail. (You could\ntest buf.data or buf.len instead, but that seems uglier to me.)\n\n\n",
"msg_date": "Mon, 30 Nov 2020 15:25:28 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On Tue, Dec 1, 2020 at 3:25 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2020-Dec-01, Fujii Masao wrote:\n>\n> > + if (proc)\n> > + {\n> > + if (nprocs == 0)\n> > + appendStringInfo(&buf, \"%d\", proc->pid);\n> > + else\n> > + appendStringInfo(&buf, \", %d\", proc->pid);\n> > +\n> > + nprocs++;\n> >\n> > What happens if all the backends in wait_list have gone? In other words,\n> > how should we handle the case where nprocs == 0 (i.e., nprocs has not been\n> > incrmented at all)? This would very rarely happen, but can happen.\n> > In this case, since buf.data is empty, at least there seems no need to log\n> > the list of conflicting processes in detail message.\n>\n> Yes, I noticed this too; this can be simplified by changing the\n> condition in the ereport() call to be \"nprocs > 0\" (rather than\n> wait_list being null), otherwise not print the errdetail. (You could\n> test buf.data or buf.len instead, but that seems uglier to me.)\n\n+1\n\nMaybe we can also improve the comment of this function from:\n\n+ * This function also reports the details about the conflicting\n+ * process ids if *wait_list is not NULL.\n\nto \" This function also reports the details about the conflicting\nprocess ids if exist\" or something.\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 1 Dec 2020 08:35:53 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "Hi,\n\nOn 12/1/20 12:35 AM, Masahiko Sawada wrote:\n> On Tue, Dec 1, 2020 at 3:25 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>> On 2020-Dec-01, Fujii Masao wrote:\n>>\n>>> + if (proc)\n>>> + {\n>>> + if (nprocs == 0)\n>>> + appendStringInfo(&buf, \"%d\", proc->pid);\n>>> + else\n>>> + appendStringInfo(&buf, \", %d\", proc->pid);\n>>> +\n>>> + nprocs++;\n>>>\n>>> What happens if all the backends in wait_list have gone? In other words,\n>>> how should we handle the case where nprocs == 0 (i.e., nprocs has not been\n>>> incrmented at all)? This would very rarely happen, but can happen.\n>>> In this case, since buf.data is empty, at least there seems no need to log\n>>> the list of conflicting processes in detail message.\n>> Yes, I noticed this too; this can be simplified by changing the\n>> condition in the ereport() call to be \"nprocs > 0\" (rather than\n>> wait_list being null), otherwise not print the errdetail. (You could\n>> test buf.data or buf.len instead, but that seems uglier to me.)\n> +1\n>\n> Maybe we can also improve the comment of this function from:\n>\n> + * This function also reports the details about the conflicting\n> + * process ids if *wait_list is not NULL.\n>\n> to \" This function also reports the details about the conflicting\n> process ids if exist\" or something.\n>\nThank you all for the review/remarks.\n\nThey have been addressed in the new attached patch version.\n\nThanks!\n\nBertrand",
"msg_date": "Tue, 1 Dec 2020 09:29:31 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On 2020/12/01 17:29, Drouvot, Bertrand wrote:\n> Hi,\n> \n> On 12/1/20 12:35 AM, Masahiko Sawada wrote:\n>> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n>>\n>>\n>>\n>> On Tue, Dec 1, 2020 at 3:25 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>>> On 2020-Dec-01, Fujii Masao wrote:\n>>>\n>>>> + if (proc)\n>>>> + {\n>>>> + if (nprocs == 0)\n>>>> + appendStringInfo(&buf, \"%d\", proc->pid);\n>>>> + else\n>>>> + appendStringInfo(&buf, \", %d\", proc->pid);\n>>>> +\n>>>> + nprocs++;\n>>>>\n>>>> What happens if all the backends in wait_list have gone? In other words,\n>>>> how should we handle the case where nprocs == 0 (i.e., nprocs has not been\n>>>> incrmented at all)? This would very rarely happen, but can happen.\n>>>> In this case, since buf.data is empty, at least there seems no need to log\n>>>> the list of conflicting processes in detail message.\n>>> Yes, I noticed this too; this can be simplified by changing the\n>>> condition in the ereport() call to be \"nprocs > 0\" (rather than\n>>> wait_list being null), otherwise not print the errdetail. (You could\n>>> test buf.data or buf.len instead, but that seems uglier to me.)\n>> +1\n>>\n>> Maybe we can also improve the comment of this function from:\n>>\n>> + * This function also reports the details about the conflicting\n>> + * process ids if *wait_list is not NULL.\n>>\n>> to \" This function also reports the details about the conflicting\n>> process ids if exist\" or something.\n>>\n> Thank you all for the review/remarks.\n> \n> They have been addressed in the new attached patch version.\n\nThanks for updating the patch! I read through the patch again\nand applied the following chages to it. Attached is the updated\nversion of the patch. Could you review this version? 
If there is\nno issue in it, I'm thinking to commit this version.\n\n+\t\t\tif (waitStart > 0 && !logged_recovery_conflict)\n+\t\t\t{\n+\t\t\t\tTimestampTz cur_ts = GetCurrentTimestamp();\n+\t\t\t\tif (TimestampDifferenceExceeds(waitStart, cur_ts,\n+\t\t\t\t\t\t\t\t\t\t DeadlockTimeout))\n\nOn the first time through, this is executed before we have started\nactually waiting. Which is a bit wasteful. So I changed LockBufferForCleanup()\nand ResolveRecoveryConflictWithVirtualXIDs() so that the code for logging\nthe recovery conflict is executed after the function to wait is executed.\n\n+\tereport(LOG,\n+\t\t\terrmsg(\"recovery still waiting after %ld.%03d ms: %s\",\n+\t\t\t\t\tmsecs, usecs, _(get_recovery_conflict_desc(reason))),\n+\t\t\twait_list > 0 ? errdetail_log_plural(\"Conflicting process: %s.\",\n+\t\t\t\t\t\t\t\t\t\t\t\t\t\"Conflicting processes: %s.\",\n+\t\t\t\t\t\t\t\t\t\t\t\t\tnprocs, buf.data) : 0);\n\nSeems \"wait_list > 0\" should be \"nprocs > 0\". So I changed the code that way.\n\n+\t\t\tif (waitStart > 0)\n \t\t\t{\n-\t\t\t\tconst char *old_status;\n\nI added \"(!logged_recovery_conflict || new_status == NULL)\" into\nthe above if-condition, to avoid executing again the code for logging\nafter PS title was updated and the recovery conflict was logged.\n\n+\t\t\ttimeouts[cnt].id = STANDBY_TIMEOUT;\n+\t\t\ttimeouts[cnt].type = TMPARAM_AFTER;\n+\t\t\ttimeouts[cnt].delay_ms = DeadlockTimeout;\n\nMaybe STANDBY_TIMEOUT should be STANDBY_DEADLOCK_TIMEOUT here?\nI changed the code that way.\n\n+\t\t\t\t\t/*\n+\t\t\t\t\t * Log the recovery conflict if there is still virtual transaction\n+\t\t\t\t\t * conflicting with the lock.\n+\t\t\t\t\t */\n+\t\t\t\t\tif (cnt > 0)\n+\t\t\t\t\t{\n+\t\t\t\t\t\tLogRecoveryConflict(PROCSIG_RECOVERY_CONFLICT_LOCK,\n+\t\t\t\t\t\t\t\t\t\t\tstandbyWaitStart, cur_ts, vxids);\n+\t\t\t\t\t\tlogged_recovery_conflict = true;\n+\t\t\t\t\t}\n\nI think that ProcSleep() should log the recovery conflict even if\nthere are no conflicting 
virtual transactions. Because the startup\nprocess there has already waited longer than deadlock_timeout,\nwhether or not conflicting virtual transactions are still running.\n\nAlso LogRecoveryConflict() logs the recovery conflict even if it\nfinds that there are no conflicting active backends. So the rule\nabout whether to log the conflict when there are no conflicting\nbackends should be made consistent between functions, I think.\nThought?\n\n\nAlso I added more comments.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Fri, 4 Dec 2020 02:53:59 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
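The timing logic reviewed in the message above (log the conflict only once the wait has exceeded deadlock_timeout, then report the elapsed time in the "%ld.%03d ms" style of the patch's log message) can be sketched with plain 64-bit microsecond timestamps. This is a simplified, hypothetical stand-in for PostgreSQL's TimestampTz arithmetic, not the actual implementation.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Timestamps are microseconds since an arbitrary epoch, like TimestampTz. */

/* Have at least msec milliseconds elapsed between start and stop? */
static bool
timestamp_difference_exceeds(int64_t start, int64_t stop, int msec)
{
	return (stop - start) >= (int64_t) msec * 1000;
}

/*
 * Split an elapsed interval into whole milliseconds and leftover
 * microseconds, matching the "recovery still waiting after %ld.%03d ms"
 * format used by the log message in the patch.
 */
static void
split_elapsed(int64_t start, int64_t stop, long *msecs, int *usecs)
{
	int64_t		diff = stop - start;

	*msecs = (long) (diff / 1000);
	*usecs = (int) (diff % 1000);
}
```

This also illustrates why checking the elapsed time before actually starting to wait is wasteful, as noted in the review: on the first pass the difference is near zero, so the check can only fire after at least one wait.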
{
"msg_contents": "On Fri, Dec 4, 2020 at 2:54 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/12/01 17:29, Drouvot, Bertrand wrote:\n> > Hi,\n> >\n> > On 12/1/20 12:35 AM, Masahiko Sawada wrote:\n> >> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n> >>\n> >>\n> >>\n> >> On Tue, Dec 1, 2020 at 3:25 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >>> On 2020-Dec-01, Fujii Masao wrote:\n> >>>\n> >>>> + if (proc)\n> >>>> + {\n> >>>> + if (nprocs == 0)\n> >>>> + appendStringInfo(&buf, \"%d\", proc->pid);\n> >>>> + else\n> >>>> + appendStringInfo(&buf, \", %d\", proc->pid);\n> >>>> +\n> >>>> + nprocs++;\n> >>>>\n> >>>> What happens if all the backends in wait_list have gone? In other words,\n> >>>> how should we handle the case where nprocs == 0 (i.e., nprocs has not been\n> >>>> incrmented at all)? This would very rarely happen, but can happen.\n> >>>> In this case, since buf.data is empty, at least there seems no need to log\n> >>>> the list of conflicting processes in detail message.\n> >>> Yes, I noticed this too; this can be simplified by changing the\n> >>> condition in the ereport() call to be \"nprocs > 0\" (rather than\n> >>> wait_list being null), otherwise not print the errdetail. (You could\n> >>> test buf.data or buf.len instead, but that seems uglier to me.)\n> >> +1\n> >>\n> >> Maybe we can also improve the comment of this function from:\n> >>\n> >> + * This function also reports the details about the conflicting\n> >> + * process ids if *wait_list is not NULL.\n> >>\n> >> to \" This function also reports the details about the conflicting\n> >> process ids if exist\" or something.\n> >>\n> > Thank you all for the review/remarks.\n> >\n> > They have been addressed in the new attached patch version.\n>\n> Thanks for updating the patch! I read through the patch again\n> and applied the following chages to it. 
Attached is the updated\n> version of the patch. Could you review this version? If there is\n> no issue in it, I'm thinking to commit this version.\n\nThank you for updating the patch! I have one question.\n\n>\n> + timeouts[cnt].id = STANDBY_TIMEOUT;\n> + timeouts[cnt].type = TMPARAM_AFTER;\n> + timeouts[cnt].delay_ms = DeadlockTimeout;\n>\n> Maybe STANDBY_TIMEOUT should be STANDBY_DEADLOCK_TIMEOUT here?\n> I changed the code that way.\n\nAs the comment of ResolveRecoveryConflictWithLock() says the\nfollowing, a deadlock is detected by the ordinary backend process:\n\n * Deadlocks involving the Startup process and an ordinary backend proces\n * will be detected by the deadlock detector within the ordinary backend.\n\nIf we use STANDBY_DEADLOCK_TIMEOUT,\nSendRecoveryConflictWithBufferPin() will be called after\nDeadlockTimeout passed, but I think it's not necessary for the startup\nprocess in this case. If we want to just wake up the startup process\nmaybe we can use STANDBY_TIMEOUT here?\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 4 Dec 2020 09:28:03 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "\n\nOn 2020/12/04 9:28, Masahiko Sawada wrote:\n> On Fri, Dec 4, 2020 at 2:54 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/12/01 17:29, Drouvot, Bertrand wrote:\n>>> Hi,\n>>>\n>>> On 12/1/20 12:35 AM, Masahiko Sawada wrote:\n>>>> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n>>>>\n>>>>\n>>>>\n>>>> On Tue, Dec 1, 2020 at 3:25 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>>>>> On 2020-Dec-01, Fujii Masao wrote:\n>>>>>\n>>>>>> + if (proc)\n>>>>>> + {\n>>>>>> + if (nprocs == 0)\n>>>>>> + appendStringInfo(&buf, \"%d\", proc->pid);\n>>>>>> + else\n>>>>>> + appendStringInfo(&buf, \", %d\", proc->pid);\n>>>>>> +\n>>>>>> + nprocs++;\n>>>>>>\n>>>>>> What happens if all the backends in wait_list have gone? In other words,\n>>>>>> how should we handle the case where nprocs == 0 (i.e., nprocs has not been\n>>>>>> incrmented at all)? This would very rarely happen, but can happen.\n>>>>>> In this case, since buf.data is empty, at least there seems no need to log\n>>>>>> the list of conflicting processes in detail message.\n>>>>> Yes, I noticed this too; this can be simplified by changing the\n>>>>> condition in the ereport() call to be \"nprocs > 0\" (rather than\n>>>>> wait_list being null), otherwise not print the errdetail. (You could\n>>>>> test buf.data or buf.len instead, but that seems uglier to me.)\n>>>> +1\n>>>>\n>>>> Maybe we can also improve the comment of this function from:\n>>>>\n>>>> + * This function also reports the details about the conflicting\n>>>> + * process ids if *wait_list is not NULL.\n>>>>\n>>>> to \" This function also reports the details about the conflicting\n>>>> process ids if exist\" or something.\n>>>>\n>>> Thank you all for the review/remarks.\n>>>\n>>> They have been addressed in the new attached patch version.\n>>\n>> Thanks for updating the patch! 
I read through the patch again\n>> and applied the following chages to it. Attached is the updated\n>> version of the patch. Could you review this version? If there is\n>> no issue in it, I'm thinking to commit this version.\n> \n> Thank you for updating the patch! I have one question.\n> \n>>\n>> + timeouts[cnt].id = STANDBY_TIMEOUT;\n>> + timeouts[cnt].type = TMPARAM_AFTER;\n>> + timeouts[cnt].delay_ms = DeadlockTimeout;\n>>\n>> Maybe STANDBY_TIMEOUT should be STANDBY_DEADLOCK_TIMEOUT here?\n>> I changed the code that way.\n> \n> As the comment of ResolveRecoveryConflictWithLock() says the\n> following, a deadlock is detected by the ordinary backend process:\n> \n> * Deadlocks involving the Startup process and an ordinary backend proces\n> * will be detected by the deadlock detector within the ordinary backend.\n> \n> If we use STANDBY_DEADLOCK_TIMEOUT,\n> SendRecoveryConflictWithBufferPin() will be called after\n> DeadlockTimeout passed, but I think it's not necessary for the startup\n> process in this case.\n\nThanks for pointing this! You are right.\n\n\n> If we want to just wake up the startup process\n> maybe we can use STANDBY_TIMEOUT here?\n\nWhen STANDBY_TIMEOUT happens, a request to release conflicting buffer pins is sent. Right? If so, we should not also use STANDBY_TIMEOUT there?\n\nOr, first of all, we don't need to enable the deadlock timer at all? Since what we'd like to do is to wake up after deadlock_timeout passes, we can do that by changing ProcWaitForSignal() so that it can accept the timeout and giving the deadlock_timeout to it. If we do this, maybe we can get rid of STANDBY_LOCK_TIMEOUT from ResolveRecoveryConflictWithLock(). Thought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 4 Dec 2020 10:21:35 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "Hi,\n\nOn 12/4/20 2:21 AM, Fujii Masao wrote:\n>\n> On 2020/12/04 9:28, Masahiko Sawada wrote:\n>> On Fri, Dec 4, 2020 at 2:54 AM Fujii Masao \n>> <masao.fujii@oss.nttdata.com> wrote:\n>>>\n>>>\n>>>\n>>> On 2020/12/01 17:29, Drouvot, Bertrand wrote:\n>>>> Hi,\n>>>>\n>>>> On 12/1/20 12:35 AM, Masahiko Sawada wrote:\n>>>>> CAUTION: This email originated from outside of the organization. \n>>>>> Do not click links or open attachments unless you can confirm the \n>>>>> sender and know the content is safe.\n>>>>>\n>>>>>\n>>>>>\n>>>>> On Tue, Dec 1, 2020 at 3:25 AM Alvaro Herrera \n>>>>> <alvherre@alvh.no-ip.org> wrote:\n>>>>>> On 2020-Dec-01, Fujii Masao wrote:\n>>>>>>\n>>>>>>> + if (proc)\n>>>>>>> + {\n>>>>>>> + if (nprocs == 0)\n>>>>>>> + appendStringInfo(&buf, \"%d\", proc->pid);\n>>>>>>> + else\n>>>>>>> + appendStringInfo(&buf, \", %d\", proc->pid);\n>>>>>>> +\n>>>>>>> + nprocs++;\n>>>>>>>\n>>>>>>> What happens if all the backends in wait_list have gone? In \n>>>>>>> other words,\n>>>>>>> how should we handle the case where nprocs == 0 (i.e., nprocs \n>>>>>>> has not been\n>>>>>>> incrmented at all)? This would very rarely happen, but can happen.\n>>>>>>> In this case, since buf.data is empty, at least there seems no \n>>>>>>> need to log\n>>>>>>> the list of conflicting processes in detail message.\n>>>>>> Yes, I noticed this too; this can be simplified by changing the\n>>>>>> condition in the ereport() call to be \"nprocs > 0\" (rather than\n>>>>>> wait_list being null), otherwise not print the errdetail. 
(You \n>>>>>> could\n>>>>>> test buf.data or buf.len instead, but that seems uglier to me.)\n>>>>> +1\n>>>>>\n>>>>> Maybe we can also improve the comment of this function from:\n>>>>>\n>>>>> + * This function also reports the details about the conflicting\n>>>>> + * process ids if *wait_list is not NULL.\n>>>>>\n>>>>> to \" This function also reports the details about the conflicting\n>>>>> process ids if exist\" or something.\n>>>>>\n>>>> Thank you all for the review/remarks.\n>>>>\n>>>> They have been addressed in the new attached patch version.\n>>>\n>>> Thanks for updating the patch! I read through the patch again\n>>> and applied the following chages to it. Attached is the updated\n>>> version of the patch. Could you review this version? If there is\n>>> no issue in it, I'm thinking to commit this version.\n>>\n>> Thank you for updating the patch! I have one question.\n>>\n>>>\n>>> + timeouts[cnt].id = STANDBY_TIMEOUT;\n>>> + timeouts[cnt].type = TMPARAM_AFTER;\n>>> + timeouts[cnt].delay_ms = DeadlockTimeout;\n>>>\n>>> Maybe STANDBY_TIMEOUT should be STANDBY_DEADLOCK_TIMEOUT here?\n>>> I changed the code that way.\n>>\n>> As the comment of ResolveRecoveryConflictWithLock() says the\n>> following, a deadlock is detected by the ordinary backend process:\n>>\n>> * Deadlocks involving the Startup process and an ordinary backend \n>> proces\n>> * will be detected by the deadlock detector within the ordinary \n>> backend.\n>>\n>> If we use STANDBY_DEADLOCK_TIMEOUT,\n>> SendRecoveryConflictWithBufferPin() will be called after\n>> DeadlockTimeout passed, but I think it's not necessary for the startup\n>> process in this case.\n>\n> Thanks for pointing this! You are right.\n>\n>\n>> If we want to just wake up the startup process\n>> maybe we can use STANDBY_TIMEOUT here?\n>\nThanks for the patch updates! Except what we are still discussing below, \nit looks good to me.\n\n> When STANDBY_TIMEOUT happens, a request to release conflicting buffer \n> pins is sent. Right? 
If so, we should not also use STANDBY_TIMEOUT there?\n\nAgree\n\n>\n> Or, first of all, we don't need to enable the deadlock timer at all? \n> Since what we'd like to do is to wake up after deadlock_timeout \n> passes, we can do that by changing ProcWaitForSignal() so that it can \n> accept the timeout and giving the deadlock_timeout to it. If we do \n> this, maybe we can get rid of STANDBY_LOCK_TIMEOUT from \n> ResolveRecoveryConflictWithLock(). Thought?\n\nWhy not simply use (again) the STANDBY_LOCK_TIMEOUT one? (as it triggers \na call to StandbyLockTimeoutHandler() which does nothing, except waking \nup. That's what we want, right?)\n\nI've attached a new version that makes use of it (that's the only change \ncompare to Masao's updates).\n\nThanks\n\nBertrand",
"msg_date": "Fri, 4 Dec 2020 11:22:17 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On Fri, Dec 4, 2020 at 7:22 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> Hi,\n>\n> On 12/4/20 2:21 AM, Fujii Masao wrote:\n> >\n> > On 2020/12/04 9:28, Masahiko Sawada wrote:\n> >> On Fri, Dec 4, 2020 at 2:54 AM Fujii Masao\n> >> <masao.fujii@oss.nttdata.com> wrote:\n> >>>\n> >>>\n> >>>\n> >>> On 2020/12/01 17:29, Drouvot, Bertrand wrote:\n> >>>> Hi,\n> >>>>\n> >>>> On 12/1/20 12:35 AM, Masahiko Sawada wrote:\n> >>>>> CAUTION: This email originated from outside of the organization.\n> >>>>> Do not click links or open attachments unless you can confirm the\n> >>>>> sender and know the content is safe.\n> >>>>>\n> >>>>>\n> >>>>>\n> >>>>> On Tue, Dec 1, 2020 at 3:25 AM Alvaro Herrera\n> >>>>> <alvherre@alvh.no-ip.org> wrote:\n> >>>>>> On 2020-Dec-01, Fujii Masao wrote:\n> >>>>>>\n> >>>>>>> + if (proc)\n> >>>>>>> + {\n> >>>>>>> + if (nprocs == 0)\n> >>>>>>> + appendStringInfo(&buf, \"%d\", proc->pid);\n> >>>>>>> + else\n> >>>>>>> + appendStringInfo(&buf, \", %d\", proc->pid);\n> >>>>>>> +\n> >>>>>>> + nprocs++;\n> >>>>>>>\n> >>>>>>> What happens if all the backends in wait_list have gone? In\n> >>>>>>> other words,\n> >>>>>>> how should we handle the case where nprocs == 0 (i.e., nprocs\n> >>>>>>> has not been\n> >>>>>>> incrmented at all)? This would very rarely happen, but can happen.\n> >>>>>>> In this case, since buf.data is empty, at least there seems no\n> >>>>>>> need to log\n> >>>>>>> the list of conflicting processes in detail message.\n> >>>>>> Yes, I noticed this too; this can be simplified by changing the\n> >>>>>> condition in the ereport() call to be \"nprocs > 0\" (rather than\n> >>>>>> wait_list being null), otherwise not print the errdetail. 
(You\n> >>>>>> could\n> >>>>>> test buf.data or buf.len instead, but that seems uglier to me.)\n> >>>>> +1\n> >>>>>\n> >>>>> Maybe we can also improve the comment of this function from:\n> >>>>>\n> >>>>> + * This function also reports the details about the conflicting\n> >>>>> + * process ids if *wait_list is not NULL.\n> >>>>>\n> >>>>> to \" This function also reports the details about the conflicting\n> >>>>> process ids if exist\" or something.\n> >>>>>\n> >>>> Thank you all for the review/remarks.\n> >>>>\n> >>>> They have been addressed in the new attached patch version.\n> >>>\n> >>> Thanks for updating the patch! I read through the patch again\n> >>> and applied the following chages to it. Attached is the updated\n> >>> version of the patch. Could you review this version? If there is\n> >>> no issue in it, I'm thinking to commit this version.\n> >>\n> >> Thank you for updating the patch! I have one question.\n> >>\n> >>>\n> >>> + timeouts[cnt].id = STANDBY_TIMEOUT;\n> >>> + timeouts[cnt].type = TMPARAM_AFTER;\n> >>> + timeouts[cnt].delay_ms = DeadlockTimeout;\n> >>>\n> >>> Maybe STANDBY_TIMEOUT should be STANDBY_DEADLOCK_TIMEOUT here?\n> >>> I changed the code that way.\n> >>\n> >> As the comment of ResolveRecoveryConflictWithLock() says the\n> >> following, a deadlock is detected by the ordinary backend process:\n> >>\n> >> * Deadlocks involving the Startup process and an ordinary backend\n> >> proces\n> >> * will be detected by the deadlock detector within the ordinary\n> >> backend.\n> >>\n> >> If we use STANDBY_DEADLOCK_TIMEOUT,\n> >> SendRecoveryConflictWithBufferPin() will be called after\n> >> DeadlockTimeout passed, but I think it's not necessary for the startup\n> >> process in this case.\n> >\n> > Thanks for pointing this! You are right.\n> >\n> >\n> >> If we want to just wake up the startup process\n> >> maybe we can use STANDBY_TIMEOUT here?\n> >\n> Thanks for the patch updates! 
Except what we are still discussing below,\n> it looks good to me.\n>\n> > When STANDBY_TIMEOUT happens, a request to release conflicting buffer\n> > pins is sent. Right? If so, we should not also use STANDBY_TIMEOUT there?\n>\n> Agree\n>\n> >\n> > Or, first of all, we don't need to enable the deadlock timer at all?\n> > Since what we'd like to do is to wake up after deadlock_timeout\n> > passes, we can do that by changing ProcWaitForSignal() so that it can\n> > accept the timeout and giving the deadlock_timeout to it. If we do\n> > this, maybe we can get rid of STANDBY_LOCK_TIMEOUT from\n> > ResolveRecoveryConflictWithLock(). Thought?\n\nWhere do we enable deadlock timeout in hot standby case? You meant to\nenable it in ProcWaitForSignal() or where we set a timer for not hot\nstandby case, in ProcSleep()?\n\n>\n> Why not simply use (again) the STANDBY_LOCK_TIMEOUT one? (as it triggers\n> a call to StandbyLockTimeoutHandler() which does nothing, except waking\n> up. That's what we want, right?)\n\nRight, what I wanted to mean is STANDBY_LOCK_TIMEOUT. The startup\nprocess can wake up and do nothing. Thank you for pointing out.\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Sat, 5 Dec 2020 12:38:12 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "\n\nOn 2020/12/05 12:38, Masahiko Sawada wrote:\n> On Fri, Dec 4, 2020 at 7:22 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>>\n>> Hi,\n>>\n>> On 12/4/20 2:21 AM, Fujii Masao wrote:\n>>>\n>>> On 2020/12/04 9:28, Masahiko Sawada wrote:\n>>>> On Fri, Dec 4, 2020 at 2:54 AM Fujii Masao\n>>>> <masao.fujii@oss.nttdata.com> wrote:\n>>>>>\n>>>>>\n>>>>>\n>>>>> On 2020/12/01 17:29, Drouvot, Bertrand wrote:\n>>>>>> Hi,\n>>>>>>\n>>>>>> On 12/1/20 12:35 AM, Masahiko Sawada wrote:\n>>>>>>> CAUTION: This email originated from outside of the organization.\n>>>>>>> Do not click links or open attachments unless you can confirm the\n>>>>>>> sender and know the content is safe.\n>>>>>>>\n>>>>>>>\n>>>>>>>\n>>>>>>> On Tue, Dec 1, 2020 at 3:25 AM Alvaro Herrera\n>>>>>>> <alvherre@alvh.no-ip.org> wrote:\n>>>>>>>> On 2020-Dec-01, Fujii Masao wrote:\n>>>>>>>>\n>>>>>>>>> + if (proc)\n>>>>>>>>> + {\n>>>>>>>>> + if (nprocs == 0)\n>>>>>>>>> + appendStringInfo(&buf, \"%d\", proc->pid);\n>>>>>>>>> + else\n>>>>>>>>> + appendStringInfo(&buf, \", %d\", proc->pid);\n>>>>>>>>> +\n>>>>>>>>> + nprocs++;\n>>>>>>>>>\n>>>>>>>>> What happens if all the backends in wait_list have gone? In\n>>>>>>>>> other words,\n>>>>>>>>> how should we handle the case where nprocs == 0 (i.e., nprocs\n>>>>>>>>> has not been\n>>>>>>>>> incrmented at all)? This would very rarely happen, but can happen.\n>>>>>>>>> In this case, since buf.data is empty, at least there seems no\n>>>>>>>>> need to log\n>>>>>>>>> the list of conflicting processes in detail message.\n>>>>>>>> Yes, I noticed this too; this can be simplified by changing the\n>>>>>>>> condition in the ereport() call to be \"nprocs > 0\" (rather than\n>>>>>>>> wait_list being null), otherwise not print the errdetail. 
(You\n>>>>>>>> could\n>>>>>>>> test buf.data or buf.len instead, but that seems uglier to me.)\n>>>>>>> +1\n>>>>>>>\n>>>>>>> Maybe we can also improve the comment of this function from:\n>>>>>>>\n>>>>>>> + * This function also reports the details about the conflicting\n>>>>>>> + * process ids if *wait_list is not NULL.\n>>>>>>>\n>>>>>>> to \" This function also reports the details about the conflicting\n>>>>>>> process ids if exist\" or something.\n>>>>>>>\n>>>>>> Thank you all for the review/remarks.\n>>>>>>\n>>>>>> They have been addressed in the new attached patch version.\n>>>>>\n>>>>> Thanks for updating the patch! I read through the patch again\n>>>>> and applied the following chages to it. Attached is the updated\n>>>>> version of the patch. Could you review this version? If there is\n>>>>> no issue in it, I'm thinking to commit this version.\n>>>>\n>>>> Thank you for updating the patch! I have one question.\n>>>>\n>>>>>\n>>>>> + timeouts[cnt].id = STANDBY_TIMEOUT;\n>>>>> + timeouts[cnt].type = TMPARAM_AFTER;\n>>>>> + timeouts[cnt].delay_ms = DeadlockTimeout;\n>>>>>\n>>>>> Maybe STANDBY_TIMEOUT should be STANDBY_DEADLOCK_TIMEOUT here?\n>>>>> I changed the code that way.\n>>>>\n>>>> As the comment of ResolveRecoveryConflictWithLock() says the\n>>>> following, a deadlock is detected by the ordinary backend process:\n>>>>\n>>>> * Deadlocks involving the Startup process and an ordinary backend\n>>>> proces\n>>>> * will be detected by the deadlock detector within the ordinary\n>>>> backend.\n>>>>\n>>>> If we use STANDBY_DEADLOCK_TIMEOUT,\n>>>> SendRecoveryConflictWithBufferPin() will be called after\n>>>> DeadlockTimeout passed, but I think it's not necessary for the startup\n>>>> process in this case.\n>>>\n>>> Thanks for pointing this! You are right.\n>>>\n>>>\n>>>> If we want to just wake up the startup process\n>>>> maybe we can use STANDBY_TIMEOUT here?\n>>>\n>> Thanks for the patch updates! 
Except what we are still discussing below,\n>> it looks good to me.\n>>\n>>> When STANDBY_TIMEOUT happens, a request to release conflicting buffer\n>>> pins is sent. Right? If so, we should not also use STANDBY_TIMEOUT there?\n>>\n>> Agree\n>>\n>>>\n>>> Or, first of all, we don't need to enable the deadlock timer at all?\n>>> Since what we'd like to do is to wake up after deadlock_timeout\n>>> passes, we can do that by changing ProcWaitForSignal() so that it can\n>>> accept the timeout and giving the deadlock_timeout to it. If we do\n>>> this, maybe we can get rid of STANDBY_LOCK_TIMEOUT from\n>>> ResolveRecoveryConflictWithLock(). Thought?\n> \n> Where do we enable deadlock timeout in hot standby case? You meant to\n> enable it in ProcWaitForSignal() or where we set a timer for not hot\n> standby case, in ProcSleep()?\n\nNo, what I tried to say is to change ResolveRecoveryConflictWithLock() so that it does\n\n1. calculate the \"minimum\" timeout from deadlock_timeout and max_standby_xxx_delay\n2. give the calculated timeout value to ProcWaitForSignal()\n3. wait for signal and timeout on ProcWaitForSignal()\n\nSince ProcWaitForSignal() calls WaitLatch(), seems it's not so difficult to make ProcWaitForSignal() handle the timeout. If we do this, I was thinking that we can get rid of enable_timeouts() from ResolveRecoveryConflictWithLock().\n\n\n> \n>>\n>> Why not simply use (again) the STANDBY_LOCK_TIMEOUT one? (as it triggers\n>> a call to StandbyLockTimeoutHandler() which does nothing, except waking\n>> up. That's what we want, right?)\n> \n> Right, what I wanted to mean is STANDBY_LOCK_TIMEOUT. The startup\n> process can wake up and do nothing. Thank you for pointing out.\n\nOkay, understood! Firstly I was thinking that enabling the same type (i.e., STANDBY_LOCK_TIMEOUT) of lock twice doesn't work properly, but as far as I read the code, it works. In that case, only the shorter timeout would be activated in enable_timeouts(). 
So I agree to use STANDBY_LOCK_TIMEOUT.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 14 Dec 2020 21:31:26 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
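The idea floated above, of computing the "minimum" timeout from deadlock_timeout and the max_standby_*_delay before sleeping, and the observation that when the same timeout type is enabled twice only the shorter one effectively fires, can be sketched as a tiny helper. This is a hypothetical illustration; in PostgreSQL the selection actually happens inside timeout.c's enable_timeouts() machinery.

```c
#include <assert.h>

/*
 * Pick the next wake-up interval for the startup process: the earlier of
 * deadlock_timeout (so the conflict can be logged) and the remaining
 * max_standby_*_delay (so conflicting backends can be cancelled).
 * A negative delay means "wait forever", mirroring
 * max_standby_streaming_delay = -1.
 */
static int
next_wakeup_ms(int deadlock_timeout_ms, int standby_delay_ms)
{
	if (standby_delay_ms < 0)
		return deadlock_timeout_ms;
	return (deadlock_timeout_ms < standby_delay_ms)
		? deadlock_timeout_ms
		: standby_delay_ms;
}
```

Passing such a value down to the wait primitive would let the startup process wake up, log the conflict, and resume waiting without registering a separate deadlock timer.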
{
"msg_contents": "On 2020/12/14 21:31, Fujii Masao wrote:\n> \n> \n> On 2020/12/05 12:38, Masahiko Sawada wrote:\n>> On Fri, Dec 4, 2020 at 7:22 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>>>\n>>> Hi,\n>>>\n>>> On 12/4/20 2:21 AM, Fujii Masao wrote:\n>>>>\n>>>> On 2020/12/04 9:28, Masahiko Sawada wrote:\n>>>>> On Fri, Dec 4, 2020 at 2:54 AM Fujii Masao\n>>>>> <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>> On 2020/12/01 17:29, Drouvot, Bertrand wrote:\n>>>>>>> Hi,\n>>>>>>>\n>>>>>>> On 12/1/20 12:35 AM, Masahiko Sawada wrote:\n>>>>>>>> CAUTION: This email originated from outside of the organization.\n>>>>>>>> Do not click links or open attachments unless you can confirm the\n>>>>>>>> sender and know the content is safe.\n>>>>>>>>\n>>>>>>>>\n>>>>>>>>\n>>>>>>>> On Tue, Dec 1, 2020 at 3:25 AM Alvaro Herrera\n>>>>>>>> <alvherre@alvh.no-ip.org> wrote:\n>>>>>>>>> On 2020-Dec-01, Fujii Masao wrote:\n>>>>>>>>>\n>>>>>>>>>> + if (proc)\n>>>>>>>>>> + {\n>>>>>>>>>> + if (nprocs == 0)\n>>>>>>>>>> + appendStringInfo(&buf, \"%d\", proc->pid);\n>>>>>>>>>> + else\n>>>>>>>>>> + appendStringInfo(&buf, \", %d\", proc->pid);\n>>>>>>>>>> +\n>>>>>>>>>> + nprocs++;\n>>>>>>>>>>\n>>>>>>>>>> What happens if all the backends in wait_list have gone? In\n>>>>>>>>>> other words,\n>>>>>>>>>> how should we handle the case where nprocs == 0 (i.e., nprocs\n>>>>>>>>>> has not been\n>>>>>>>>>> incrmented at all)? This would very rarely happen, but can happen.\n>>>>>>>>>> In this case, since buf.data is empty, at least there seems no\n>>>>>>>>>> need to log\n>>>>>>>>>> the list of conflicting processes in detail message.\n>>>>>>>>> Yes, I noticed this too; this can be simplified by changing the\n>>>>>>>>> condition in the ereport() call to be \"nprocs > 0\" (rather than\n>>>>>>>>> wait_list being null), otherwise not print the errdetail. 
(You\n>>>>>>>>> could\n>>>>>>>>> test buf.data or buf.len instead, but that seems uglier to me.)\n>>>>>>>> +1\n>>>>>>>>\n>>>>>>>> Maybe we can also improve the comment of this function from:\n>>>>>>>>\n>>>>>>>> + * This function also reports the details about the conflicting\n>>>>>>>> + * process ids if *wait_list is not NULL.\n>>>>>>>>\n>>>>>>>> to \" This function also reports the details about the conflicting\n>>>>>>>> process ids if exist\" or something.\n>>>>>>>>\n>>>>>>> Thank you all for the review/remarks.\n>>>>>>>\n>>>>>>> They have been addressed in the new attached patch version.\n>>>>>>\n>>>>>> Thanks for updating the patch! I read through the patch again\n>>>>>> and applied the following chages to it. Attached is the updated\n>>>>>> version of the patch. Could you review this version? If there is\n>>>>>> no issue in it, I'm thinking to commit this version.\n>>>>>\n>>>>> Thank you for updating the patch! I have one question.\n>>>>>\n>>>>>>\n>>>>>> + timeouts[cnt].id = STANDBY_TIMEOUT;\n>>>>>> + timeouts[cnt].type = TMPARAM_AFTER;\n>>>>>> + timeouts[cnt].delay_ms = DeadlockTimeout;\n>>>>>>\n>>>>>> Maybe STANDBY_TIMEOUT should be STANDBY_DEADLOCK_TIMEOUT here?\n>>>>>> I changed the code that way.\n>>>>>\n>>>>> As the comment of ResolveRecoveryConflictWithLock() says the\n>>>>> following, a deadlock is detected by the ordinary backend process:\n>>>>>\n>>>>> * Deadlocks involving the Startup process and an ordinary backend\n>>>>> proces\n>>>>> * will be detected by the deadlock detector within the ordinary\n>>>>> backend.\n>>>>>\n>>>>> If we use STANDBY_DEADLOCK_TIMEOUT,\n>>>>> SendRecoveryConflictWithBufferPin() will be called after\n>>>>> DeadlockTimeout passed, but I think it's not necessary for the startup\n>>>>> process in this case.\n>>>>\n>>>> Thanks for pointing this! You are right.\n>>>>\n>>>>\n>>>>> If we want to just wake up the startup process\n>>>>> maybe we can use STANDBY_TIMEOUT here?\n>>>>\n>>> Thanks for the patch updates! 
Except what we are still discussing below,\n>>> it looks good to me.\n>>>\n>>>> When STANDBY_TIMEOUT happens, a request to release conflicting buffer\n>>>> pins is sent. Right? If so, we should not also use STANDBY_TIMEOUT there?\n>>>\n>>> Agree\n>>>\n>>>>\n>>>> Or, first of all, we don't need to enable the deadlock timer at all?\n>>>> Since what we'd like to do is to wake up after deadlock_timeout\n>>>> passes, we can do that by changing ProcWaitForSignal() so that it can\n>>>> accept the timeout and giving the deadlock_timeout to it. If we do\n>>>> this, maybe we can get rid of STANDBY_LOCK_TIMEOUT from\n>>>> ResolveRecoveryConflictWithLock(). Thought?\n>>\n>> Where do we enable deadlock timeout in hot standby case? You meant to\n>> enable it in ProcWaitForSignal() or where we set a timer for not hot\n>> standby case, in ProcSleep()?\n> \n> No, what I tried to say is to change ResolveRecoveryConflictWithLock() so that it does\n> \n> 1. calculate the \"minimum\" timeout from deadlock_timeout and max_standby_xxx_delay\n> 2. give the calculated timeout value to ProcWaitForSignal()\n> 3. wait for signal and timeout on ProcWaitForSignal()\n> \n> Since ProcWaitForSignal() calls WaitLatch(), seems it's not so difficult to make ProcWaitForSignal() handle the timeout. If we do this, I was thinking that we can get rid of enable_timeouts() from ResolveRecoveryConflictWithLock().\n> \n> \n>>\n>>>\n>>> Why not simply use (again) the STANDBY_LOCK_TIMEOUT one? (as it triggers\n>>> a call to StandbyLockTimeoutHandler() which does nothing, except waking\n>>> up. That's what we want, right?)\n>>\n>> Right, what I wanted to mean is STANDBY_LOCK_TIMEOUT. The startup\n>> process can wake up and do nothing. Thank you for pointing out.\n> \n> Okay, understood! Firstly I was thinking that enabling the same type (i.e., STANDBY_LOCK_TIMEOUT) of lock twice doesn't work properly, but as far as I read the code, it works. 
In that case, only the shorter timeout would be activated in enable_timeouts(). So I agree to use STANDBY_LOCK_TIMEOUT.\n\nSo I renamed the argument \"deadlock_timer\" in ResolveRecoveryConflictWithLock()\nbecause it's not the timer for deadlock and is confusing. Attached is the\nupdated version of the patch. Barring any objection, I will commit this version.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Tue, 15 Dec 2020 00:20:32 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "Hi,\n\nOn 12/14/20 4:20 PM, Fujii Masao wrote:\n> CAUTION: This email originated from outside of the organization. Do \n> not click links or open attachments unless you can confirm the sender \n> and know the content is safe.\n>\n>\n>\n> On 2020/12/14 21:31, Fujii Masao wrote:\n>>\n>>\n>> On 2020/12/05 12:38, Masahiko Sawada wrote:\n>>> On Fri, Dec 4, 2020 at 7:22 PM Drouvot, Bertrand \n>>> <bdrouvot@amazon.com> wrote:\n>>>>\n>>>> Hi,\n>>>>\n>>>> On 12/4/20 2:21 AM, Fujii Masao wrote:\n>>>>>\n>>>>> On 2020/12/04 9:28, Masahiko Sawada wrote:\n>>>>>> On Fri, Dec 4, 2020 at 2:54 AM Fujii Masao\n>>>>>> <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>>\n>>>>>>>\n>>>>>>>\n>>>>>>> On 2020/12/01 17:29, Drouvot, Bertrand wrote:\n>>>>>>>> Hi,\n>>>>>>>>\n>>>>>>>> On 12/1/20 12:35 AM, Masahiko Sawada wrote:\n>>>>>>>>> CAUTION: This email originated from outside of the organization.\n>>>>>>>>> Do not click links or open attachments unless you can confirm the\n>>>>>>>>> sender and know the content is safe.\n>>>>>>>>>\n>>>>>>>>>\n>>>>>>>>>\n>>>>>>>>> On Tue, Dec 1, 2020 at 3:25 AM Alvaro Herrera\n>>>>>>>>> <alvherre@alvh.no-ip.org> wrote:\n>>>>>>>>>> On 2020-Dec-01, Fujii Masao wrote:\n>>>>>>>>>>\n>>>>>>>>>>> + if (proc)\n>>>>>>>>>>> + {\n>>>>>>>>>>> + if (nprocs == 0)\n>>>>>>>>>>> + appendStringInfo(&buf, \"%d\", proc->pid);\n>>>>>>>>>>> + else\n>>>>>>>>>>> + appendStringInfo(&buf, \", %d\", proc->pid);\n>>>>>>>>>>> +\n>>>>>>>>>>> + nprocs++;\n>>>>>>>>>>>\n>>>>>>>>>>> What happens if all the backends in wait_list have gone? In\n>>>>>>>>>>> other words,\n>>>>>>>>>>> how should we handle the case where nprocs == 0 (i.e., nprocs\n>>>>>>>>>>> has not been\n>>>>>>>>>>> incrmented at all)? 
This would very rarely happen, but can \n>>>>>>>>>>> happen.\n>>>>>>>>>>> In this case, since buf.data is empty, at least there seems no\n>>>>>>>>>>> need to log\n>>>>>>>>>>> the list of conflicting processes in detail message.\n>>>>>>>>>> Yes, I noticed this too; this can be simplified by changing the\n>>>>>>>>>> condition in the ereport() call to be \"nprocs > 0\" (rather than\n>>>>>>>>>> wait_list being null), otherwise not print the errdetail. (You\n>>>>>>>>>> could\n>>>>>>>>>> test buf.data or buf.len instead, but that seems uglier to me.)\n>>>>>>>>> +1\n>>>>>>>>>\n>>>>>>>>> Maybe we can also improve the comment of this function from:\n>>>>>>>>>\n>>>>>>>>> + * This function also reports the details about the conflicting\n>>>>>>>>> + * process ids if *wait_list is not NULL.\n>>>>>>>>>\n>>>>>>>>> to \" This function also reports the details about the conflicting\n>>>>>>>>> process ids if exist\" or something.\n>>>>>>>>>\n>>>>>>>> Thank you all for the review/remarks.\n>>>>>>>>\n>>>>>>>> They have been addressed in the new attached patch version.\n>>>>>>>\n>>>>>>> Thanks for updating the patch! I read through the patch again\n>>>>>>> and applied the following chages to it. Attached is the updated\n>>>>>>> version of the patch. Could you review this version? If there is\n>>>>>>> no issue in it, I'm thinking to commit this version.\n>>>>>>\n>>>>>> Thank you for updating the patch! 
I have one question.\n>>>>>>\n>>>>>>>\n>>>>>>> + timeouts[cnt].id = STANDBY_TIMEOUT;\n>>>>>>> + timeouts[cnt].type = TMPARAM_AFTER;\n>>>>>>> + timeouts[cnt].delay_ms = DeadlockTimeout;\n>>>>>>>\n>>>>>>> Maybe STANDBY_TIMEOUT should be STANDBY_DEADLOCK_TIMEOUT here?\n>>>>>>> I changed the code that way.\n>>>>>>\n>>>>>> As the comment of ResolveRecoveryConflictWithLock() says the\n>>>>>> following, a deadlock is detected by the ordinary backend process:\n>>>>>>\n>>>>>> * Deadlocks involving the Startup process and an ordinary backend\n>>>>>> proces\n>>>>>> * will be detected by the deadlock detector within the ordinary\n>>>>>> backend.\n>>>>>>\n>>>>>> If we use STANDBY_DEADLOCK_TIMEOUT,\n>>>>>> SendRecoveryConflictWithBufferPin() will be called after\n>>>>>> DeadlockTimeout passed, but I think it's not necessary for the \n>>>>>> startup\n>>>>>> process in this case.\n>>>>>\n>>>>> Thanks for pointing this! You are right.\n>>>>>\n>>>>>\n>>>>>> If we want to just wake up the startup process\n>>>>>> maybe we can use STANDBY_TIMEOUT here?\n>>>>>\n>>>> Thanks for the patch updates! Except what we are still discussing \n>>>> below,\n>>>> it looks good to me.\n>>>>\n>>>>> When STANDBY_TIMEOUT happens, a request to release conflicting buffer\n>>>>> pins is sent. Right? If so, we should not also use STANDBY_TIMEOUT \n>>>>> there?\n>>>>\n>>>> Agree\n>>>>\n>>>>>\n>>>>> Or, first of all, we don't need to enable the deadlock timer at all?\n>>>>> Since what we'd like to do is to wake up after deadlock_timeout\n>>>>> passes, we can do that by changing ProcWaitForSignal() so that it can\n>>>>> accept the timeout and giving the deadlock_timeout to it. If we do\n>>>>> this, maybe we can get rid of STANDBY_LOCK_TIMEOUT from\n>>>>> ResolveRecoveryConflictWithLock(). Thought?\n>>>\n>>> Where do we enable deadlock timeout in hot standby case? 
You meant to\n>>> enable it in ProcWaitForSignal() or where we set a timer for not hot\n>>> standby case, in ProcSleep()?\n>>\n>> No, what I tried to say is to change \n>> ResolveRecoveryConflictWithLock() so that it does\n>>\n>> 1. calculate the \"minimum\" timeout from deadlock_timeout and \n>> max_standby_xxx_delay\n>> 2. give the calculated timeout value to ProcWaitForSignal()\n>> 3. wait for signal and timeout on ProcWaitForSignal()\n>>\n>> Since ProcWaitForSignal() calls WaitLatch(), seems it's not so \n>> difficult to make ProcWaitForSignal() handle the timeout. If we do \n>> this, I was thinking that we can get rid of enable_timeouts() from \n>> ResolveRecoveryConflictWithLock().\n>>\n>>\n>>>\n>>>>\n>>>> Why not simply use (again) the STANDBY_LOCK_TIMEOUT one? (as it \n>>>> triggers\n>>>> a call to StandbyLockTimeoutHandler() which does nothing, except \n>>>> waking\n>>>> up. That's what we want, right?)\n>>>\n>>> Right, what I wanted to mean is STANDBY_LOCK_TIMEOUT. The startup\n>>> process can wake up and do nothing. Thank you for pointing out.\n>>\n>> Okay, understood! Firstly I was thinking that enabling the same type \n>> (i.e., STANDBY_LOCK_TIMEOUT) of lock twice doesn't work properly, but \n>> as far as I read the code, it works. In that case, only the shorter \n>> timeout would be activated in enable_timeouts(). So I agree to use \n>> STANDBY_LOCK_TIMEOUT.\n>\n> So I renamed the argument \"deadlock_timer\" in \n> ResolveRecoveryConflictWithLock()\n> because it's not the timer for deadlock and is confusing. Attached is the\n> updated version of the patch. Barring any objection, I will commit \n> this version.\n\nThanks for the update!\n\nIndeed the naming is more appropriate and less confusing that way, this \nversion looks all good to me.\n\nThanks!\n\nBertrand\n\n\n\n",
"msg_date": "Mon, 14 Dec 2020 16:49:16 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On 2020/12/15 0:49, Drouvot, Bertrand wrote:\n> Hi,\n> \n> On 12/14/20 4:20 PM, Fujii Masao wrote:\n>> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n>>\n>>\n>>\n>> On 2020/12/14 21:31, Fujii Masao wrote:\n>>>\n>>>\n>>> On 2020/12/05 12:38, Masahiko Sawada wrote:\n>>>> On Fri, Dec 4, 2020 at 7:22 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>>>>>\n>>>>> Hi,\n>>>>>\n>>>>> On 12/4/20 2:21 AM, Fujii Masao wrote:\n>>>>>>\n>>>>>> On 2020/12/04 9:28, Masahiko Sawada wrote:\n>>>>>>> On Fri, Dec 4, 2020 at 2:54 AM Fujii Masao\n>>>>>>> <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>>>\n>>>>>>>>\n>>>>>>>>\n>>>>>>>> On 2020/12/01 17:29, Drouvot, Bertrand wrote:\n>>>>>>>>> Hi,\n>>>>>>>>>\n>>>>>>>>> On 12/1/20 12:35 AM, Masahiko Sawada wrote:\n>>>>>>>>>> CAUTION: This email originated from outside of the organization.\n>>>>>>>>>> Do not click links or open attachments unless you can confirm the\n>>>>>>>>>> sender and know the content is safe.\n>>>>>>>>>>\n>>>>>>>>>>\n>>>>>>>>>>\n>>>>>>>>>> On Tue, Dec 1, 2020 at 3:25 AM Alvaro Herrera\n>>>>>>>>>> <alvherre@alvh.no-ip.org> wrote:\n>>>>>>>>>>> On 2020-Dec-01, Fujii Masao wrote:\n>>>>>>>>>>>\n>>>>>>>>>>>> + if (proc)\n>>>>>>>>>>>> + {\n>>>>>>>>>>>> + if (nprocs == 0)\n>>>>>>>>>>>> + appendStringInfo(&buf, \"%d\", proc->pid);\n>>>>>>>>>>>> + else\n>>>>>>>>>>>> + appendStringInfo(&buf, \", %d\", proc->pid);\n>>>>>>>>>>>> +\n>>>>>>>>>>>> + nprocs++;\n>>>>>>>>>>>>\n>>>>>>>>>>>> What happens if all the backends in wait_list have gone? In\n>>>>>>>>>>>> other words,\n>>>>>>>>>>>> how should we handle the case where nprocs == 0 (i.e., nprocs\n>>>>>>>>>>>> has not been\n>>>>>>>>>>>> incrmented at all)? 
This would very rarely happen, but can happen.\n>>>>>>>>>>>> In this case, since buf.data is empty, at least there seems no\n>>>>>>>>>>>> need to log\n>>>>>>>>>>>> the list of conflicting processes in detail message.\n>>>>>>>>>>> Yes, I noticed this too; this can be simplified by changing the\n>>>>>>>>>>> condition in the ereport() call to be \"nprocs > 0\" (rather than\n>>>>>>>>>>> wait_list being null), otherwise not print the errdetail. (You\n>>>>>>>>>>> could\n>>>>>>>>>>> test buf.data or buf.len instead, but that seems uglier to me.)\n>>>>>>>>>> +1\n>>>>>>>>>>\n>>>>>>>>>> Maybe we can also improve the comment of this function from:\n>>>>>>>>>>\n>>>>>>>>>> + * This function also reports the details about the conflicting\n>>>>>>>>>> + * process ids if *wait_list is not NULL.\n>>>>>>>>>>\n>>>>>>>>>> to \" This function also reports the details about the conflicting\n>>>>>>>>>> process ids if exist\" or something.\n>>>>>>>>>>\n>>>>>>>>> Thank you all for the review/remarks.\n>>>>>>>>>\n>>>>>>>>> They have been addressed in the new attached patch version.\n>>>>>>>>\n>>>>>>>> Thanks for updating the patch! I read through the patch again\n>>>>>>>> and applied the following chages to it. Attached is the updated\n>>>>>>>> version of the patch. Could you review this version? If there is\n>>>>>>>> no issue in it, I'm thinking to commit this version.\n>>>>>>>\n>>>>>>> Thank you for updating the patch! 
I have one question.\n>>>>>>>\n>>>>>>>>\n>>>>>>>> + timeouts[cnt].id = STANDBY_TIMEOUT;\n>>>>>>>> + timeouts[cnt].type = TMPARAM_AFTER;\n>>>>>>>> + timeouts[cnt].delay_ms = DeadlockTimeout;\n>>>>>>>>\n>>>>>>>> Maybe STANDBY_TIMEOUT should be STANDBY_DEADLOCK_TIMEOUT here?\n>>>>>>>> I changed the code that way.\n>>>>>>>\n>>>>>>> As the comment of ResolveRecoveryConflictWithLock() says the\n>>>>>>> following, a deadlock is detected by the ordinary backend process:\n>>>>>>>\n>>>>>>> * Deadlocks involving the Startup process and an ordinary backend\n>>>>>>> proces\n>>>>>>> * will be detected by the deadlock detector within the ordinary\n>>>>>>> backend.\n>>>>>>>\n>>>>>>> If we use STANDBY_DEADLOCK_TIMEOUT,\n>>>>>>> SendRecoveryConflictWithBufferPin() will be called after\n>>>>>>> DeadlockTimeout passed, but I think it's not necessary for the startup\n>>>>>>> process in this case.\n>>>>>>\n>>>>>> Thanks for pointing this! You are right.\n>>>>>>\n>>>>>>\n>>>>>>> If we want to just wake up the startup process\n>>>>>>> maybe we can use STANDBY_TIMEOUT here?\n>>>>>>\n>>>>> Thanks for the patch updates! Except what we are still discussing below,\n>>>>> it looks good to me.\n>>>>>\n>>>>>> When STANDBY_TIMEOUT happens, a request to release conflicting buffer\n>>>>>> pins is sent. Right? If so, we should not also use STANDBY_TIMEOUT there?\n>>>>>\n>>>>> Agree\n>>>>>\n>>>>>>\n>>>>>> Or, first of all, we don't need to enable the deadlock timer at all?\n>>>>>> Since what we'd like to do is to wake up after deadlock_timeout\n>>>>>> passes, we can do that by changing ProcWaitForSignal() so that it can\n>>>>>> accept the timeout and giving the deadlock_timeout to it. If we do\n>>>>>> this, maybe we can get rid of STANDBY_LOCK_TIMEOUT from\n>>>>>> ResolveRecoveryConflictWithLock(). Thought?\n>>>>\n>>>> Where do we enable deadlock timeout in hot standby case? 
You meant to\n>>>> enable it in ProcWaitForSignal() or where we set a timer for not hot\n>>>> standby case, in ProcSleep()?\n>>>\n>>> No, what I tried to say is to change ResolveRecoveryConflictWithLock() so that it does\n>>>\n>>> 1. calculate the \"minimum\" timeout from deadlock_timeout and max_standby_xxx_delay\n>>> 2. give the calculated timeout value to ProcWaitForSignal()\n>>> 3. wait for signal and timeout on ProcWaitForSignal()\n>>>\n>>> Since ProcWaitForSignal() calls WaitLatch(), seems it's not so difficult to make ProcWaitForSignal() handle the timeout. If we do this, I was thinking that we can get rid of enable_timeouts() from ResolveRecoveryConflictWithLock().\n>>>\n>>>\n>>>>\n>>>>>\n>>>>> Why not simply use (again) the STANDBY_LOCK_TIMEOUT one? (as it triggers\n>>>>> a call to StandbyLockTimeoutHandler() which does nothing, except waking\n>>>>> up. That's what we want, right?)\n>>>>\n>>>> Right, what I wanted to mean is STANDBY_LOCK_TIMEOUT. The startup\n>>>> process can wake up and do nothing. Thank you for pointing out.\n>>>\n>>> Okay, understood! Firstly I was thinking that enabling the same type (i.e., STANDBY_LOCK_TIMEOUT) of lock twice doesn't work properly, but as far as I read the code, it works. In that case, only the shorter timeout would be activated in enable_timeouts(). So I agree to use STANDBY_LOCK_TIMEOUT.\n>>\n>> So I renamed the argument \"deadlock_timer\" in ResolveRecoveryConflictWithLock()\n>> because it's not the timer for deadlock and is confusing. Attached is the\n>> updated version of the patch. Barring any objection, I will commit this version.\n> \n> Thanks for the update!\n> \n> Indeed the naming is more appropriate and less confusing that way, this version looks all good to me.\n\nThanks for the review! 
I'm thinking to wait half a day before committing\nthe patch just in case someone objects to it.\n\nBTW, attached is the POC patch that implements the idea discussed upthread;\nif log_recovery_conflict_waits is enabled, the startup process also reports\nthe log after the recovery conflict was resolved and the startup process\nfinished waiting for it. This patch needs to be applied after\nv11-0002-Log-the-standby-recovery-conflict-waits.patch is applied.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Tue, 15 Dec 2020 02:00:21 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "At Tue, 15 Dec 2020 02:00:21 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> Thanks for the review! I'm thinking to wait half a day before\n> commiting\n> the patch just in the case someone may object the patch.\n\nSorry for coming late. I have looked only the latest thread so I\nshould be missing many things so please ignore if it was settled in\nthe past discussion.\n\n\nIt emits messages like the follows;\n\n[40509:startup] LOG: recovery still waiting after 1021.431 ms: recovery conflict on lock\n[40509:startup] DETAIL: Conflicting processes: 41171, 41194.\n[40509:startup] CONTEXT: WAL redo at 0/3013158 for Standby/LOCK: xid 510 db 13609 rel 16384 \n\nIFAIK DETAIL usually shows ordinary sentences so the first word is\ncapitalized and ends with a period. But it is not a sentence so\nfollowing period looks odd. a searcheing the tree for errdetails\nshowed some anomalies.\n\nsrc/backend/parser/parse_param.c \t\t\t\t\t errdetail(\"%s versus %s\",\nsrc/backend/jit/llvm/llvmjit_error.cpp \t\t\t errdetail(\"while in LLVM\")));\nsrc/backend/replication/logical/tablesync.c \t\t\t\t errdetail(\"The error was: %s\", res->err)));\nsrc/backend/tcop/postgres.c \t\t\t\terrdetail(\"prepare: %s\", pstmt->plansource->query_string);\nsrc/backend/tcop/postgres.c \t\terrdetail(\"abort reason: recovery conflict\");\n\nand one similar occurance:\n\nsrc/backend/utils/adt/dbsize.c \t\t\t\t\t errdetail(\"Invalid size unit: \\\"%s\\\".\", strptr),\n\nI'm not sure, but it seems to me at least the period is unnecessary\nhere.\n\n\n+\t\t\tbool\t\tmaybe_log_conflict =\n+\t\t\t(standbyWaitStart != 0 && !logged_recovery_conflict);\n\nodd indentation.\n\n\n+\t\t/* Also, set the timer if necessary */\n+\t\tif (logging_timer)\n+\t\t{\n+\t\t\ttimeouts[cnt].id = STANDBY_LOCK_TIMEOUT;\n+\t\t\ttimeouts[cnt].type = TMPARAM_AFTER;\n+\t\t\ttimeouts[cnt].delay_ms = DeadlockTimeout;\n+\t\t\tcnt++;\n+\t\t}\n\nThis doesn't consider spurious wakeup. 
I'm not sure it actually\nhappenes but we usually consider that. That is since before this\npatch, but ProcWaitForSignal()'s comment says that:\n\n> * As this uses the generic process latch the caller has to be robust against\n> * unrelated wakeups: Always check that the desired state has occurred, and\n> * wait again if not.\n\nIf we don't care of spurious wakeups, we should add such a comment.\n\n\n+\t\t\t\tbool\t\tmaybe_log_conflict;\n+\t\t\t\tbool\t\tmaybe_update_title;\n\nAlthough it should be a matter of taste and I understand that the\n\"maybe\" means that \"that logging and changing of ps display may not\nhappen in this iteration\" , that variables seem expressing\nrespectively \"we should write log if the timeout for recovery conflict\nexpires\" and \"we should update title if 500ms elapsed\". So they seem\n*to me* better be just \"log_conflict\" and \"update_title\".\n\nI feel the same with \"maybe_log_conflict\" in ProcSleep().\n\n\n+ for recovery conflicts. This is useful in determining if recovery\n+ conflicts prevent the recovery from applying WAL.\n\n(I'm not confident on this) Isn't the sentence better be in past or\npresent continuous tense?\n\n\n> BTW, attached is the POC patch that implements the idea discussed\n> upthread;\n> if log_recovery_conflict_waits is enabled, the startup process reports\n> the log also after the recovery conflict was resolved and the startup\n> process\n> finished waiting for it. This patch needs to be applied after\n> v11-0002-Log-the-standby-recovery-conflict-waits.patch is applied.\n\nAh. I was just writing a comment about that. I haven't looked it\ncloser but it looks good to me. By the way doesn't it contains a\nsimple fix of a comment for the base patch?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 15 Dec 2020 12:04:35 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "\n\nOn 2020/12/15 12:04, Kyotaro Horiguchi wrote:\n> At Tue, 15 Dec 2020 02:00:21 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>> Thanks for the review! I'm thinking to wait half a day before\n>> commiting\n>> the patch just in the case someone may object the patch.\n> \n> Sorry for coming late. I have looked only the latest thread so I\n> should be missing many things so please ignore if it was settled in\n> the past discussion.\n> \n> \n> It emits messages like the follows;\n> \n> [40509:startup] LOG: recovery still waiting after 1021.431 ms: recovery conflict on lock\n> [40509:startup] DETAIL: Conflicting processes: 41171, 41194.\n> [40509:startup] CONTEXT: WAL redo at 0/3013158 for Standby/LOCK: xid 510 db 13609 rel 16384\n> \n> IFAIK DETAIL usually shows ordinary sentences so the first word is\n> capitalized and ends with a period. But it is not a sentence so\n> following period looks odd. a searcheing the tree for errdetails\n> showed some anomalies.\n> \n> src/backend/parser/parse_param.c \t\t\t\t\t errdetail(\"%s versus %s\",\n> src/backend/jit/llvm/llvmjit_error.cpp \t\t\t errdetail(\"while in LLVM\")));\n> src/backend/replication/logical/tablesync.c \t\t\t\t errdetail(\"The error was: %s\", res->err)));\n> src/backend/tcop/postgres.c \t\t\t\terrdetail(\"prepare: %s\", pstmt->plansource->query_string);\n> src/backend/tcop/postgres.c \t\terrdetail(\"abort reason: recovery conflict\");\n> \n> and one similar occurance:\n> \n> src/backend/utils/adt/dbsize.c \t\t\t\t\t errdetail(\"Invalid size unit: \\\"%s\\\".\", strptr),\n> \n> I'm not sure, but it seems to me at least the period is unnecessary\n> here.\n\nSince Error Message Style Guide in the docs says \"Detail and hint messages:\nUse complete sentences, and end each with a period.\", I think that a period\nis necessary here. 
No?\n\n\n> \n> \n> +\t\t\tbool\t\tmaybe_log_conflict =\n> +\t\t\t(standbyWaitStart != 0 && !logged_recovery_conflict);\n> \n> odd indentation.\n\nThis is the result of pgindent run. I'm not sure why pgindent indents\nthat way, but for now I'd like to follow pgindent.\n\n\n> \n> \n> +\t\t/* Also, set the timer if necessary */\n> +\t\tif (logging_timer)\n> +\t\t{\n> +\t\t\ttimeouts[cnt].id = STANDBY_LOCK_TIMEOUT;\n> +\t\t\ttimeouts[cnt].type = TMPARAM_AFTER;\n> +\t\t\ttimeouts[cnt].delay_ms = DeadlockTimeout;\n> +\t\t\tcnt++;\n> +\t\t}\n> \n> This doesn't consider spurious wakeup. I'm not sure it actually\n> happenes but we usually consider that. That is since before this\n> patch, but ProcWaitForSignal()'s comment says that:\n> \n>> * As this uses the generic process latch the caller has to be robust against\n>> * unrelated wakeups: Always check that the desired state has occurred, and\n>> * wait again if not.\n> \n> If we don't care of spurious wakeups, we should add such a comment.\n\nIf ProcWaitForSignal() wakes up because of the reason (e.g., SIGHUP)\nother than deadlock_timeout, ProcSleep() will call\nResolveRecoveryConflictWithLock() and we sleep on ProcWaitForSignal()\nagain since the recovery conflict has not been resolved yet. So we can\nsay that we consider \"spurious wakeups\"?\n\nHowever when I read the related code again, I found another issue in\nthe patch. In ResolveRecoveryConflictWithLock(), if SIGHUP causes us to\nwake up out of ProcWaitForSignal() before deadlock_timeout is reached,\nwe will use deadlock_timeout again when sleeping on ProcWaitForSignal().\nInstead, probably we should use the \"deadlock_timeout - elapsed time\"\nso that we can emit a log message as soon as deadlock_timeout passes\nsince starting waiting on recovery conflict. 
Otherwise it may take at most\n\"deadlock_timeout * 2\" to log \"still waiting ...\" message.\n\nTo fix this issue, \"deadlock_timeout - elapsed time\" needs to be used as\nthe timeout when enabling the timer at least in\nResolveRecoveryConflictWithLock() and ResolveRecoveryConflictWithBufferPin().\nAlso similar change needs to be applied to\nResolveRecoveryConflictWithVirtualXIDs().\n\nBTW, without applying the patch, *originally*\nResolveRecoveryConflictWithBufferPin() seems to have this issue.\nIt enables deadlock_timeout timer so as to request for hot-standbfy\nbackends to check themselves for deadlocks. But if we wake up out of\nProcWaitForSignal() before deadlock_timeout is reached, the subsequent\ncall to ResolveRecoveryConflictWithBufferPin() also uses deadlock_timeout\nagain instead of \"deadlock_timeout - elapsed time\". So the request for\ndeadlock check can be delayed. Furthermore,\nif ResolveRecoveryConflictWithBufferPin() always wakes up out of\nProcWaitForSignal() before deadlock_timeout is reached, the request\nfor deadlock check may also never happen infinitely.\n\nMaybe we should fix the original issue at first separately from the patch.\n\n\n> +\t\t\t\tbool\t\tmaybe_log_conflict;\n> +\t\t\t\tbool\t\tmaybe_update_title;\n> \n> Although it should be a matter of taste and I understand that the\n> \"maybe\" means that \"that logging and changing of ps display may not\n> happen in this iteration\" , that variables seem expressing\n> respectively \"we should write log if the timeout for recovery conflict\n> expires\" and \"we should update title if 500ms elapsed\". So they seem\n> *to me* better be just \"log_conflict\" and \"update_title\".\n> \n> I feel the same with \"maybe_log_conflict\" in ProcSleep().\n\nI have no strong opinion about those names. So if other people also\nthink so, I'm ok to rename them.\n\n\n> \n> \n> + for recovery conflicts. 
This is useful in determining if recovery\n> + conflicts prevent the recovery from applying WAL.\n> \n> (I'm not confident on this) Isn't the sentence better be in past or\n> present continuous tense?\n\nCould you tell me why you think that's better?\n\n\n>> BTW, attached is the POC patch that implements the idea discussed\n>> upthread;\n>> if log_recovery_conflict_waits is enabled, the startup process reports\n>> the log also after the recovery conflict was resolved and the startup\n>> process\n>> finished waiting for it. This patch needs to be applied after\n>> v11-0002-Log-the-standby-recovery-conflict-waits.patch is applied.\n> \n> Ah. I was just writing a comment about that. I haven't looked it\n> closer but it looks good to me. By the way doesn't it contains a\n> simple fix of a comment for the base patch?\n\nYes, so the typo included in the base patch should be fixed when pushing it.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 15 Dec 2020 15:40:03 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "\n\nOn 2020/12/15 15:40, Fujii Masao wrote:\n> \n> \n> On 2020/12/15 12:04, Kyotaro Horiguchi wrote:\n>> At Tue, 15 Dec 2020 02:00:21 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>> Thanks for the review! I'm thinking to wait half a day before\n>>> commiting\n>>> the patch just in the case someone may object the patch.\n>>\n>> Sorry for coming late. I have looked only the latest thread so I\n>> should be missing many things so please ignore if it was settled in\n>> the past discussion.\n>>\n>>\n>> It emits messages like the follows;\n>>\n>> [40509:startup] LOG: recovery still waiting after 1021.431 ms: recovery conflict on lock\n>> [40509:startup] DETAIL: Conflicting processes: 41171, 41194.\n>> [40509:startup] CONTEXT: WAL redo at 0/3013158 for Standby/LOCK: xid 510 db 13609 rel 16384\n>>\n>> IFAIK DETAIL usually shows ordinary sentences so the first word is\n>> capitalized and ends with a period. But it is not a sentence so\n>> following period looks odd. a searcheing the tree for errdetails\n>> showed some anomalies.\n>>\n>> src/backend/parser/parse_param.c errdetail(\"%s versus %s\",\n>> src/backend/jit/llvm/llvmjit_error.cpp errdetail(\"while in LLVM\")));\n>> src/backend/replication/logical/tablesync.c errdetail(\"The error was: %s\", res->err)));\n>> src/backend/tcop/postgres.c errdetail(\"prepare: %s\", pstmt->plansource->query_string);\n>> src/backend/tcop/postgres.c errdetail(\"abort reason: recovery conflict\");\n>>\n>> and one similar occurance:\n>>\n>> src/backend/utils/adt/dbsize.c errdetail(\"Invalid size unit: \\\"%s\\\".\", strptr),\n>>\n>> I'm not sure, but it seems to me at least the period is unnecessary\n>> here.\n> \n> Since Error Message Style Guide in the docs says \"Detail and hint messages:\n> Use complete sentences, and end each with a period.\", I think that a period\n> is necessary here. 
No?\n> \n> \n>>\n>>\n>> + bool maybe_log_conflict =\n>> + (standbyWaitStart != 0 && !logged_recovery_conflict);\n>>\n>> odd indentation.\n> \n> This is the result of pgindent run. I'm not sure why pgindent indents\n> that way, but for now I'd like to follow pgindent.\n> \n> \n>>\n>>\n>> + /* Also, set the timer if necessary */\n>> + if (logging_timer)\n>> + {\n>> + timeouts[cnt].id = STANDBY_LOCK_TIMEOUT;\n>> + timeouts[cnt].type = TMPARAM_AFTER;\n>> + timeouts[cnt].delay_ms = DeadlockTimeout;\n>> + cnt++;\n>> + }\n>>\n>> This doesn't consider spurious wakeup. I'm not sure it actually\n>> happenes but we usually consider that. That is since before this\n>> patch, but ProcWaitForSignal()'s comment says that:\n>>\n>>> * As this uses the generic process latch the caller has to be robust against\n>>> * unrelated wakeups: Always check that the desired state has occurred, and\n>>> * wait again if not.\n>>\n>> If we don't care of spurious wakeups, we should add such a comment.\n> \n> If ProcWaitForSignal() wakes up because of the reason (e.g., SIGHUP)\n> other than deadlock_timeout, ProcSleep() will call\n> ResolveRecoveryConflictWithLock() and we sleep on ProcWaitForSignal()\n> again since the recovery conflict has not been resolved yet. So we can\n> say that we consider \"spurious wakeups\"?\n> \n> However when I read the related code again, I found another issue in\n> the patch. In ResolveRecoveryConflictWithLock(), if SIGHUP causes us to\n> wake up out of ProcWaitForSignal() before deadlock_timeout is reached,\n> we will use deadlock_timeout again when sleeping on ProcWaitForSignal().\n> Instead, probably we should use the \"deadlock_timeout - elapsed time\"\n> so that we can emit a log message as soon as deadlock_timeout passes\n> since starting waiting on recovery conflict. 
Otherwise it may take at most\n> \"deadlock_timeout * 2\" to log \"still waiting ...\" message.\n> \n> To fix this issue, \"deadlock_timeout - elapsed time\" needs to be used as\n> the timeout when enabling the timer at least in\n> ResolveRecoveryConflictWithLock() and ResolveRecoveryConflictWithBufferPin().\n> Also similar change needs to be applied to\n> ResolveRecoveryConflictWithVirtualXIDs().\n> \n> BTW, without applying the patch, *originally*\n> ResolveRecoveryConflictWithBufferPin() seems to have this issue.\n> It enables deadlock_timeout timer so as to request for hot-standbfy\n> backends to check themselves for deadlocks. But if we wake up out of\n> ProcWaitForSignal() before deadlock_timeout is reached, the subsequent\n> call to ResolveRecoveryConflictWithBufferPin() also uses deadlock_timeout\n> again instead of \"deadlock_timeout - elapsed time\". So the request for\n> deadlock check can be delayed. Furthermore,\n> if ResolveRecoveryConflictWithBufferPin() always wakes up out of\n> ProcWaitForSignal() before deadlock_timeout is reached, the request\n> for deadlock check may also never happen infinitely.\n> \n> Maybe we should fix the original issue at first separately from the patch.\n\nHmm... commit ac22929a26 seems to make the thing worse. Before that commit,\nother wakeup request like SIGHUP didn't cause ProcWaitForSignal() to\nactually wake up in ResolveRecoveryConflictWithBufferPin(). Because such\nother wakeup requests use the different latch from that that\nProcWaitForSignal() waits on.\n\nBut commit ac22929a26 changed the startup process code so that they\nboth use the same latch. Which could cause ProcWaitForSignal() to be\nmore likely to wake up because of the requests other than deadlock_timeout.\n\nMaybe we need to revert commit ac22929a26.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 15 Dec 2020 16:45:30 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "At Tue, 15 Dec 2020 15:40:03 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2020/12/15 12:04, Kyotaro Horiguchi wrote:\n> > [40509:startup] DETAIL: Conflicting processes: 41171, 41194.\n...\n> > I'm not sure, but it seems to me at least the period is unnecessary\n> > here.\n> \n> Since Error Message Style Guide in the docs says \"Detail and hint\n> messages:\n> Use complete sentences, and end each with a period.\", I think that a\n> period\n> is necessary here. No?\n\nIn the first place it is not a complete sentence. Might be better be\nsomething like this if we strictly follow the style guide?\n\n> Conflicting processes are 41171, 41194.\n> Conflicting processes are: 41171, 41194.\n\n> \n> > +\t\t\tbool\t\tmaybe_log_conflict =\n> > +\t\t\t(standbyWaitStart != 0 && !logged_recovery_conflict);\n> > odd indentation.\n> \n> This is the result of pgindent run. I'm not sure why pgindent indents\n> that way, but for now I'd like to follow pgindent.\n\nAgreed.\n\n> > +\t\t/* Also, set the timer if necessary */\n> > +\t\tif (logging_timer)\n> > +\t\t{\n> > +\t\t\ttimeouts[cnt].id = STANDBY_LOCK_TIMEOUT;\n> > +\t\t\ttimeouts[cnt].type = TMPARAM_AFTER;\n> > +\t\t\ttimeouts[cnt].delay_ms = DeadlockTimeout;\n> > +\t\t\tcnt++;\n> > +\t\t}\n> > This doesn't consider spurious wakeup. I'm not sure it actually\n> > happenes but we usually consider that. 
That is since before this\n> > patch, but ProcWaitForSignal()'s comment says that:\n> > \n> >> * As this uses the generic process latch the caller has to be robust\n> >> * against\n> >> * unrelated wakeups: Always check that the desired state has occurred,\n> >> * and\n> >> * wait again if not.\n> > If we don't care of spurious wakeups, we should add such a comment.\n> \n> If ProcWaitForSignal() wakes up because of the reason (e.g., SIGHUP)\n> other than deadlock_timeout, ProcSleep() will call\n> ResolveRecoveryConflictWithLock() and we sleep on ProcWaitForSignal()\n> again since the recovery conflict has not been resolved yet. So we can\n> say that we consider \"spurious wakeups\"?\n\nSo, the following seems to be spurious wakeups..\n\n> However when I read the related code again, I found another issue in\n> the patch. In ResolveRecoveryConflictWithLock(), if SIGHUP causes us\n> to\n> wake up out of ProcWaitForSignal() before deadlock_timeout is reached,\n> we will use deadlock_timeout again when sleeping on\n> ProcWaitForSignal().\n> Instead, probably we should use the \"deadlock_timeout - elapsed time\"\n> so that we can emit a log message as soon as deadlock_timeout passes\n> since starting waiting on recovery conflict. Otherwise it may take at\n> most\n> \"deadlock_timeout * 2\" to log \"still waiting ...\" message.\n> \n> To fix this issue, \"deadlock_timeout - elapsed time\" needs to be used\n> as\n> the timeout when enabling the timer at least in\n> ResolveRecoveryConflictWithLock() and\n> ResolveRecoveryConflictWithBufferPin().\n> Also similar change needs to be applied to\n> ResolveRecoveryConflictWithVirtualXIDs().\n> \n> BTW, without applying the patch, *originally*\n> ResolveRecoveryConflictWithBufferPin() seems to have this issue.\n> It enables deadlock_timeout timer so as to request for hot-standbfy\n> backends to check themselves for deadlocks. 
But if we wake up out of\n> ProcWaitForSignal() before deadlock_timeout is reached, the subsequent\n> call to ResolveRecoveryConflictWithBufferPin() also uses\n> deadlock_timeout\n> again instead of \"deadlock_timeout - elapsed time\". So the request for\n> deadlock check can be delayed. Furthermore,\n> if ResolveRecoveryConflictWithBufferPin() always wakes up out of\n> ProcWaitForSignal() before deadlock_timeout is reached, the request\n> for deadlock check may also never happen infinitely.\n> \n> Maybe we should fix the original issue at first separately from the\n> patch.\n\nYeah, it's not an issue of this patch.\n\n> > +\t\t\t\tbool\t\tmaybe_log_conflict;\n> > +\t\t\t\tbool\t\tmaybe_update_title;\n> > Although it should be a matter of taste and I understand that the\n> > \"maybe\" means that \"that logging and changing of ps display may not\n> > happen in this iteration\" , that variables seem expressing\n> > respectively \"we should write log if the timeout for recovery conflict\n> > expires\" and \"we should update title if 500ms elapsed\". So they seem\n> > *to me* better be just \"log_conflict\" and \"update_title\".\n> > I feel the same with \"maybe_log_conflict\" in ProcSleep().\n> \n> I have no strong opinion about those names. So if other people also\n> think so, I'm ok to rename them.\n\nShorter is better as far as it makes sense and not-too abbreviated.\n\n> > + for recovery conflicts. This is useful in determining if recovery\n> > + conflicts prevent the recovery from applying WAL.\n> > (I'm not confident on this) Isn't the sentence better be in past or\n> > present continuous tense?\n> \n> Could you tell me why you think that's better?\n\nTo make sure, I mentioned about the \"prevent\". The reason for that is\nit represents the status at the present or past. I don't insist on\nthat if you don't think it's not better.\n\n> for recovery conflicts. 
This is useful in determining if recovery\n> conflicts are preventing the recovery from applying WAL.\n\n> >> BTW, attached is the POC patch that implements the idea discussed\n> >> upthread;\n> >> if log_recovery_conflict_waits is enabled, the startup process reports\n> >> the log also after the recovery conflict was resolved and the startup\n> >> process\n> >> finished waiting for it. This patch needs to be applied after\n> >> v11-0002-Log-the-standby-recovery-conflict-waits.patch is applied.\n> > Ah. I was just writing a comment about that. I haven't looked it\n> > closer but it looks good to me. By the way doesn't it contains a\n> > simple fix of a comment for the base patch?\n> \n> Yes, so the typo included in the base patch should be fixed when\n> pushing it.\n\nUnderstood. Thanks.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 16 Dec 2020 11:22:35 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On Mon, Dec 14, 2020 at 9:31 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/12/05 12:38, Masahiko Sawada wrote:\n> > On Fri, Dec 4, 2020 at 7:22 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> >>\n> >> Hi,\n> >>\n> >> On 12/4/20 2:21 AM, Fujii Masao wrote:\n> >>>\n> >>> On 2020/12/04 9:28, Masahiko Sawada wrote:\n> >>>> On Fri, Dec 4, 2020 at 2:54 AM Fujii Masao\n> >>>> <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>\n> >>>>>\n> >>>>>\n> >>>>> On 2020/12/01 17:29, Drouvot, Bertrand wrote:\n> >>>>>> Hi,\n> >>>>>>\n> >>>>>> On 12/1/20 12:35 AM, Masahiko Sawada wrote:\n> >>>>>>> CAUTION: This email originated from outside of the organization.\n> >>>>>>> Do not click links or open attachments unless you can confirm the\n> >>>>>>> sender and know the content is safe.\n> >>>>>>>\n> >>>>>>>\n> >>>>>>>\n> >>>>>>> On Tue, Dec 1, 2020 at 3:25 AM Alvaro Herrera\n> >>>>>>> <alvherre@alvh.no-ip.org> wrote:\n> >>>>>>>> On 2020-Dec-01, Fujii Masao wrote:\n> >>>>>>>>\n> >>>>>>>>> + if (proc)\n> >>>>>>>>> + {\n> >>>>>>>>> + if (nprocs == 0)\n> >>>>>>>>> + appendStringInfo(&buf, \"%d\", proc->pid);\n> >>>>>>>>> + else\n> >>>>>>>>> + appendStringInfo(&buf, \", %d\", proc->pid);\n> >>>>>>>>> +\n> >>>>>>>>> + nprocs++;\n> >>>>>>>>>\n> >>>>>>>>> What happens if all the backends in wait_list have gone? In\n> >>>>>>>>> other words,\n> >>>>>>>>> how should we handle the case where nprocs == 0 (i.e., nprocs\n> >>>>>>>>> has not been\n> >>>>>>>>> incrmented at all)? This would very rarely happen, but can happen.\n> >>>>>>>>> In this case, since buf.data is empty, at least there seems no\n> >>>>>>>>> need to log\n> >>>>>>>>> the list of conflicting processes in detail message.\n> >>>>>>>> Yes, I noticed this too; this can be simplified by changing the\n> >>>>>>>> condition in the ereport() call to be \"nprocs > 0\" (rather than\n> >>>>>>>> wait_list being null), otherwise not print the errdetail. 
(You\n> >>>>>>>> could\n> >>>>>>>> test buf.data or buf.len instead, but that seems uglier to me.)\n> >>>>>>> +1\n> >>>>>>>\n> >>>>>>> Maybe we can also improve the comment of this function from:\n> >>>>>>>\n> >>>>>>> + * This function also reports the details about the conflicting\n> >>>>>>> + * process ids if *wait_list is not NULL.\n> >>>>>>>\n> >>>>>>> to \" This function also reports the details about the conflicting\n> >>>>>>> process ids if exist\" or something.\n> >>>>>>>\n> >>>>>> Thank you all for the review/remarks.\n> >>>>>>\n> >>>>>> They have been addressed in the new attached patch version.\n> >>>>>\n> >>>>> Thanks for updating the patch! I read through the patch again\n> >>>>> and applied the following chages to it. Attached is the updated\n> >>>>> version of the patch. Could you review this version? If there is\n> >>>>> no issue in it, I'm thinking to commit this version.\n> >>>>\n> >>>> Thank you for updating the patch! I have one question.\n> >>>>\n> >>>>>\n> >>>>> + timeouts[cnt].id = STANDBY_TIMEOUT;\n> >>>>> + timeouts[cnt].type = TMPARAM_AFTER;\n> >>>>> + timeouts[cnt].delay_ms = DeadlockTimeout;\n> >>>>>\n> >>>>> Maybe STANDBY_TIMEOUT should be STANDBY_DEADLOCK_TIMEOUT here?\n> >>>>> I changed the code that way.\n> >>>>\n> >>>> As the comment of ResolveRecoveryConflictWithLock() says the\n> >>>> following, a deadlock is detected by the ordinary backend process:\n> >>>>\n> >>>> * Deadlocks involving the Startup process and an ordinary backend\n> >>>> proces\n> >>>> * will be detected by the deadlock detector within the ordinary\n> >>>> backend.\n> >>>>\n> >>>> If we use STANDBY_DEADLOCK_TIMEOUT,\n> >>>> SendRecoveryConflictWithBufferPin() will be called after\n> >>>> DeadlockTimeout passed, but I think it's not necessary for the startup\n> >>>> process in this case.\n> >>>\n> >>> Thanks for pointing this! 
You are right.\n> >>>\n> >>>\n> >>>> If we want to just wake up the startup process\n> >>>> maybe we can use STANDBY_TIMEOUT here?\n> >>>\n> >> Thanks for the patch updates! Except what we are still discussing below,\n> >> it looks good to me.\n> >>\n> >>> When STANDBY_TIMEOUT happens, a request to release conflicting buffer\n> >>> pins is sent. Right? If so, we should not also use STANDBY_TIMEOUT there?\n> >>\n> >> Agree\n> >>\n> >>>\n> >>> Or, first of all, we don't need to enable the deadlock timer at all?\n> >>> Since what we'd like to do is to wake up after deadlock_timeout\n> >>> passes, we can do that by changing ProcWaitForSignal() so that it can\n> >>> accept the timeout and giving the deadlock_timeout to it. If we do\n> >>> this, maybe we can get rid of STANDBY_LOCK_TIMEOUT from\n> >>> ResolveRecoveryConflictWithLock(). Thought?\n> >\n> > Where do we enable deadlock timeout in hot standby case? You meant to\n> > enable it in ProcWaitForSignal() or where we set a timer for not hot\n> > standby case, in ProcSleep()?\n>\n> No, what I tried to say is to change ResolveRecoveryConflictWithLock() so that it does\n>\n> 1. calculate the \"minimum\" timeout from deadlock_timeout and max_standby_xxx_delay\n> 2. give the calculated timeout value to ProcWaitForSignal()\n> 3. wait for signal and timeout on ProcWaitForSignal()\n>\n> Since ProcWaitForSignal() calls WaitLatch(), seems it's not so difficult to make ProcWaitForSignal() handle the timeout. If we do this, I was thinking that we can get rid of enable_timeouts() from ResolveRecoveryConflictWithLock().\n\nThank you for your explanation! That makes sense to me. Even if we\ndon't have ProcWaitForSignal() handle the timeout, perhaps we don't\nneed to set two timeouts. As you mentioned, we can calculate the\nminimum timeout and set it (or nothing).\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 16 Dec 2020 11:55:29 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On Wed, Dec 16, 2020 at 11:22 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 15 Dec 2020 15:40:03 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n> >\n> >\n> > On 2020/12/15 12:04, Kyotaro Horiguchi wrote:\n> > > [40509:startup] DETAIL: Conflicting processes: 41171, 41194.\n> ...\n> > > I'm not sure, but it seems to me at least the period is unnecessary\n> > > here.\n> >\n> > Since Error Message Style Guide in the docs says \"Detail and hint\n> > messages:\n> > Use complete sentences, and end each with a period.\", I think that a\n> > period\n> > is necessary here. No?\n>\n> In the first place it is not a complete sentence. Might be better be\n> something like this if we strictly follow the style guide?\n\nFWIW I borrowed the message style in errdetail from log messages in ProcSleep():\n\n(errmsg(\"process %d still waiting for %s on %s after %ld.%03d ms\",\n MyProcPid, modename, buf.data, msecs, usecs),\n (errdetail_log_plural(\"Process holding the lock: %s. Wait queue: %s.\",\n \"Processes holding the lock: %s. Wait queue: %s.\",\n lockHoldersNum, lock_holders_sbuf.data,\nlock_waiters_sbuf.data))));\n\n> > Conflicting processes are 41171, 41194.\n> > Conflicting processes are: 41171, 41194.\n\nIf we use the above message we might want to change other similar\nmessages I exemplified as well.\n\nRegards,\n\n--\nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 16 Dec 2020 12:08:31 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "At Wed, 16 Dec 2020 12:08:31 +0900, Masahiko Sawada \n<sawada.mshk@gmail.com> wrote in\n> On Wed, Dec 16, 2020 at 11:22 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Tue, 15 Dec 2020 15:40:03 +0900, Fujii Masao \n> <masao.fujii@oss.nttdata.com> wrote in\n> > >\n> > >\n> > > On 2020/12/15 12:04, Kyotaro Horiguchi wrote:\n> > > > [40509:startup] DETAIL: Conflicting processes: 41171, 41194.\n> > ...\n> > > > I'm not sure, but it seems to me at least the period is \n> unnecessary\n> > > > here.\n> > >\n> > > Since Error Message Style Guide in the docs says \"Detail and \n> hint\n> > > messages:\n> > > Use complete sentences, and end each with a period.\", I think \n> that a\n> > > period\n> > > is necessary here. No?\n> >\n> > In the first place it is not a complete sentence. Might be better \n> be\n> > something like this if we strictly follow the style guide?\n>\n> FWIW I borrowed the message style in errdetail from log messages in \n> ProcSleep():\n\n> (errmsg(\"process %d still waiting for %s on %s after %ld.%03d ms\",\n> MyProcPid, modename, buf.data, msecs, usecs),\n> (errdetail_log_plural(\"Process holding the lock: %s. Wait queue: \n> %s.\",\n> \"Processes holding the lock: %s. Wait queue: \n> %s.\",\n> lockHoldersNum, lock_holders_sbuf.data,\n> lock_waiters_sbuf.data))));\n\nI was guessing that was the case.\n\n> > > Conflicting processes are 41171, 41194.\n> > > Conflicting processes are: 41171, 41194.\n\nOr I came up with the following after scanning throught the tree.\n\n| Some processes are conflicting: 41171, 41194.\n\n\n> If we use the above message we might want to change other similar\n> messages I exemplified as well.\n\nI'm not sure what should we do for other anomalies. 
Other errdetails\nof this category (incomplete sentences or the absence of a period) I\nfound are:\n\n-- period is absent\n\npgarch.c:596:\t\terrdetail(\"The failed archive command was: %s\",\npostmaster.c:3723:\terrdetail(\"Failed process was running: %s\",\nmatview.c:654:\t\terrdetail(\"Row: %s\",\ntablecmds.c:2371:\terrdetail(\"%s versus %s\",\ntablecmds.c:11512:\terrdetail(\"%s depends on column \\\"%s\\\"\",\nsubscriptioncmds.c:1081:\terrdetail(\"The error was: %s\", err),\ntablesync.c:918:\terrdetail(\"The error was: %s\", res->err)));\nbe-secure-openssl.c:235:\terrdetail(\"\\\"%s\\\" cannot be higher than \n\\\"%s\\\"\",\nauth.c:1314:\t\terrdetail_internal(\"SSPI error %x\", (unsigned \nint) r)));\nauth.c:2854:\t\terrdetail(\"LDAP diagnostics: %s\", message);\npl_exec.c:4386:\t\terrdetail_internal(\"parameters: %s\", \nerrdetail) : 0));\npostgres.c:2401:\terrdetail(\"prepare: %s\", \npstmt->plansource->query_string);\n\n-- having a period.\n\nproc.c:1479:\t\terrdetail_log_plural(\"Process holding the lock: \n%s. Wait queue: %s.\",\npl_handler.c:106:\tGUC_check_errdetail(\"Unrecognized key word: \n\\\"%s\\\".\", tok);\n\n\nAlthough it depends on the precise criteria of how they are extracted,\nit seems that the absence of a period is more common.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 16 Dec 2020 14:49:25 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "At Wed, 16 Dec 2020 14:49:25 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > > > Conflicting processes are 41171, 41194.\n> > > > Conflicting processes are: 41171, 41194.\n> \n> Or I came up with the following after scanning throught the tree.\n> \n> | Some processes are conflicting: 41171, 41194.\n\nOr\n\nSome processes are blocking recovery progress: 41171, 41194.\n\n?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 16 Dec 2020 14:54:22 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On 2020/12/15 0:20, Fujii Masao wrote:\n> \n> \n> On 2020/12/14 21:31, Fujii Masao wrote:\n>>\n>>\n>> On 2020/12/05 12:38, Masahiko Sawada wrote:\n>>> On Fri, Dec 4, 2020 at 7:22 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>>>>\n>>>> Hi,\n>>>>\n>>>> On 12/4/20 2:21 AM, Fujii Masao wrote:\n>>>>>\n>>>>> On 2020/12/04 9:28, Masahiko Sawada wrote:\n>>>>>> On Fri, Dec 4, 2020 at 2:54 AM Fujii Masao\n>>>>>> <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>>\n>>>>>>>\n>>>>>>>\n>>>>>>> On 2020/12/01 17:29, Drouvot, Bertrand wrote:\n>>>>>>>> Hi,\n>>>>>>>>\n>>>>>>>> On 12/1/20 12:35 AM, Masahiko Sawada wrote:\n>>>>>>>>> CAUTION: This email originated from outside of the organization.\n>>>>>>>>> Do not click links or open attachments unless you can confirm the\n>>>>>>>>> sender and know the content is safe.\n>>>>>>>>>\n>>>>>>>>>\n>>>>>>>>>\n>>>>>>>>> On Tue, Dec 1, 2020 at 3:25 AM Alvaro Herrera\n>>>>>>>>> <alvherre@alvh.no-ip.org> wrote:\n>>>>>>>>>> On 2020-Dec-01, Fujii Masao wrote:\n>>>>>>>>>>\n>>>>>>>>>>> + if (proc)\n>>>>>>>>>>> + {\n>>>>>>>>>>> + if (nprocs == 0)\n>>>>>>>>>>> + appendStringInfo(&buf, \"%d\", proc->pid);\n>>>>>>>>>>> + else\n>>>>>>>>>>> + appendStringInfo(&buf, \", %d\", proc->pid);\n>>>>>>>>>>> +\n>>>>>>>>>>> + nprocs++;\n>>>>>>>>>>>\n>>>>>>>>>>> What happens if all the backends in wait_list have gone? In\n>>>>>>>>>>> other words,\n>>>>>>>>>>> how should we handle the case where nprocs == 0 (i.e., nprocs\n>>>>>>>>>>> has not been\n>>>>>>>>>>> incrmented at all)? This would very rarely happen, but can happen.\n>>>>>>>>>>> In this case, since buf.data is empty, at least there seems no\n>>>>>>>>>>> need to log\n>>>>>>>>>>> the list of conflicting processes in detail message.\n>>>>>>>>>> Yes, I noticed this too; this can be simplified by changing the\n>>>>>>>>>> condition in the ereport() call to be \"nprocs > 0\" (rather than\n>>>>>>>>>> wait_list being null), otherwise not print the errdetail. 
(You\n>>>>>>>>>> could\n>>>>>>>>>> test buf.data or buf.len instead, but that seems uglier to me.)\n>>>>>>>>> +1\n>>>>>>>>>\n>>>>>>>>> Maybe we can also improve the comment of this function from:\n>>>>>>>>>\n>>>>>>>>> + * This function also reports the details about the conflicting\n>>>>>>>>> + * process ids if *wait_list is not NULL.\n>>>>>>>>>\n>>>>>>>>> to \" This function also reports the details about the conflicting\n>>>>>>>>> process ids if exist\" or something.\n>>>>>>>>>\n>>>>>>>> Thank you all for the review/remarks.\n>>>>>>>>\n>>>>>>>> They have been addressed in the new attached patch version.\n>>>>>>>\n>>>>>>> Thanks for updating the patch! I read through the patch again\n>>>>>>> and applied the following chages to it. Attached is the updated\n>>>>>>> version of the patch. Could you review this version? If there is\n>>>>>>> no issue in it, I'm thinking to commit this version.\n>>>>>>\n>>>>>> Thank you for updating the patch! I have one question.\n>>>>>>\n>>>>>>>\n>>>>>>> + timeouts[cnt].id = STANDBY_TIMEOUT;\n>>>>>>> + timeouts[cnt].type = TMPARAM_AFTER;\n>>>>>>> + timeouts[cnt].delay_ms = DeadlockTimeout;\n>>>>>>>\n>>>>>>> Maybe STANDBY_TIMEOUT should be STANDBY_DEADLOCK_TIMEOUT here?\n>>>>>>> I changed the code that way.\n>>>>>>\n>>>>>> As the comment of ResolveRecoveryConflictWithLock() says the\n>>>>>> following, a deadlock is detected by the ordinary backend process:\n>>>>>>\n>>>>>> * Deadlocks involving the Startup process and an ordinary backend\n>>>>>> proces\n>>>>>> * will be detected by the deadlock detector within the ordinary\n>>>>>> backend.\n>>>>>>\n>>>>>> If we use STANDBY_DEADLOCK_TIMEOUT,\n>>>>>> SendRecoveryConflictWithBufferPin() will be called after\n>>>>>> DeadlockTimeout passed, but I think it's not necessary for the startup\n>>>>>> process in this case.\n>>>>>\n>>>>> Thanks for pointing this! 
You are right.\n>>>>>\n>>>>>\n>>>>>> If we want to just wake up the startup process\n>>>>>> maybe we can use STANDBY_TIMEOUT here?\n>>>>>\n>>>> Thanks for the patch updates! Except what we are still discussing below,\n>>>> it looks good to me.\n>>>>\n>>>>> When STANDBY_TIMEOUT happens, a request to release conflicting buffer\n>>>>> pins is sent. Right? If so, we should not also use STANDBY_TIMEOUT there?\n>>>>\n>>>> Agree\n>>>>\n>>>>>\n>>>>> Or, first of all, we don't need to enable the deadlock timer at all?\n>>>>> Since what we'd like to do is to wake up after deadlock_timeout\n>>>>> passes, we can do that by changing ProcWaitForSignal() so that it can\n>>>>> accept the timeout and giving the deadlock_timeout to it. If we do\n>>>>> this, maybe we can get rid of STANDBY_LOCK_TIMEOUT from\n>>>>> ResolveRecoveryConflictWithLock(). Thought?\n>>>\n>>> Where do we enable deadlock timeout in hot standby case? You meant to\n>>> enable it in ProcWaitForSignal() or where we set a timer for not hot\n>>> standby case, in ProcSleep()?\n>>\n>> No, what I tried to say is to change ResolveRecoveryConflictWithLock() so that it does\n>>\n>> 1. calculate the \"minimum\" timeout from deadlock_timeout and max_standby_xxx_delay\n>> 2. give the calculated timeout value to ProcWaitForSignal()\n>> 3. wait for signal and timeout on ProcWaitForSignal()\n>>\n>> Since ProcWaitForSignal() calls WaitLatch(), seems it's not so difficult to make ProcWaitForSignal() handle the timeout. If we do this, I was thinking that we can get rid of enable_timeouts() from ResolveRecoveryConflictWithLock().\n>>\n>>\n>>>\n>>>>\n>>>> Why not simply use (again) the STANDBY_LOCK_TIMEOUT one? (as it triggers\n>>>> a call to StandbyLockTimeoutHandler() which does nothing, except waking\n>>>> up. That's what we want, right?)\n>>>\n>>> Right, what I wanted to mean is STANDBY_LOCK_TIMEOUT. The startup\n>>> process can wake up and do nothing. Thank you for pointing out.\n>>\n>> Okay, understood! 
Firstly I was thinking that enabling the same type (i.e., STANDBY_LOCK_TIMEOUT) of lock twice doesn't work properly, but as far as I read the code, it works. In that case, only the shorter timeout would be activated in enable_timeouts(). So I agree to use STANDBY_LOCK_TIMEOUT.\n> \n> So I renamed the argument \"deadlock_timer\" in ResolveRecoveryConflictWithLock()\n> because it's not the timer for deadlock and is confusing. Attached is the\n> updated version of the patch. Barring any objection, I will commit this version.\n\nSince the recent commit 8900b5a9d5 changed the recovery conflict code,\nI updated the patch. Attached is the updated version of the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Thu, 7 Jan 2021 02:31:03 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "Hi,\n\nOn 1/6/21 6:31 PM, Fujii Masao wrote:\n> CAUTION: This email originated from outside of the organization. Do \n> not click links or open attachments unless you can confirm the sender \n> and know the content is safe.\n>\n>\n>\n> On 2020/12/15 0:20, Fujii Masao wrote:\n>>\n>>\n>> On 2020/12/14 21:31, Fujii Masao wrote:\n>>>\n>>>\n>>> On 2020/12/05 12:38, Masahiko Sawada wrote:\n>>>> On Fri, Dec 4, 2020 at 7:22 PM Drouvot, Bertrand \n>>>> <bdrouvot@amazon.com> wrote:\n>>>>>\n>>>>> Hi,\n>>>>>\n>>>>> On 12/4/20 2:21 AM, Fujii Masao wrote:\n>>>>>>\n>>>>>> On 2020/12/04 9:28, Masahiko Sawada wrote:\n>>>>>>> On Fri, Dec 4, 2020 at 2:54 AM Fujii Masao\n>>>>>>> <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>>>\n>>>>>>>>\n>>>>>>>>\n>>>>>>>> On 2020/12/01 17:29, Drouvot, Bertrand wrote:\n>>>>>>>>> Hi,\n>>>>>>>>>\n>>>>>>>>> On 12/1/20 12:35 AM, Masahiko Sawada wrote:\n>>>>>>>>>> CAUTION: This email originated from outside of the organization.\n>>>>>>>>>> Do not click links or open attachments unless you can confirm \n>>>>>>>>>> the\n>>>>>>>>>> sender and know the content is safe.\n>>>>>>>>>>\n>>>>>>>>>>\n>>>>>>>>>>\n>>>>>>>>>> On Tue, Dec 1, 2020 at 3:25 AM Alvaro Herrera\n>>>>>>>>>> <alvherre@alvh.no-ip.org> wrote:\n>>>>>>>>>>> On 2020-Dec-01, Fujii Masao wrote:\n>>>>>>>>>>>\n>>>>>>>>>>>> + if (proc)\n>>>>>>>>>>>> + {\n>>>>>>>>>>>> + if (nprocs == 0)\n>>>>>>>>>>>> + appendStringInfo(&buf, \"%d\", proc->pid);\n>>>>>>>>>>>> + else\n>>>>>>>>>>>> + appendStringInfo(&buf, \", %d\", proc->pid);\n>>>>>>>>>>>> +\n>>>>>>>>>>>> + nprocs++;\n>>>>>>>>>>>>\n>>>>>>>>>>>> What happens if all the backends in wait_list have gone? In\n>>>>>>>>>>>> other words,\n>>>>>>>>>>>> how should we handle the case where nprocs == 0 (i.e., nprocs\n>>>>>>>>>>>> has not been\n>>>>>>>>>>>> incrmented at all)? 
This would very rarely happen, but can \n>>>>>>>>>>>> happen.\n>>>>>>>>>>>> In this case, since buf.data is empty, at least there seems no\n>>>>>>>>>>>> need to log\n>>>>>>>>>>>> the list of conflicting processes in detail message.\n>>>>>>>>>>> Yes, I noticed this too; this can be simplified by changing the\n>>>>>>>>>>> condition in the ereport() call to be \"nprocs > 0\" (rather than\n>>>>>>>>>>> wait_list being null), otherwise not print the errdetail. (You\n>>>>>>>>>>> could\n>>>>>>>>>>> test buf.data or buf.len instead, but that seems uglier to me.)\n>>>>>>>>>> +1\n>>>>>>>>>>\n>>>>>>>>>> Maybe we can also improve the comment of this function from:\n>>>>>>>>>>\n>>>>>>>>>> + * This function also reports the details about the conflicting\n>>>>>>>>>> + * process ids if *wait_list is not NULL.\n>>>>>>>>>>\n>>>>>>>>>> to \" This function also reports the details about the \n>>>>>>>>>> conflicting\n>>>>>>>>>> process ids if exist\" or something.\n>>>>>>>>>>\n>>>>>>>>> Thank you all for the review/remarks.\n>>>>>>>>>\n>>>>>>>>> They have been addressed in the new attached patch version.\n>>>>>>>>\n>>>>>>>> Thanks for updating the patch! I read through the patch again\n>>>>>>>> and applied the following chages to it. Attached is the updated\n>>>>>>>> version of the patch. Could you review this version? If there is\n>>>>>>>> no issue in it, I'm thinking to commit this version.\n>>>>>>>\n>>>>>>> Thank you for updating the patch! 
I have one question.\n>>>>>>>\n>>>>>>>>\n>>>>>>>> + timeouts[cnt].id = STANDBY_TIMEOUT;\n>>>>>>>> + timeouts[cnt].type = TMPARAM_AFTER;\n>>>>>>>> + timeouts[cnt].delay_ms = DeadlockTimeout;\n>>>>>>>>\n>>>>>>>> Maybe STANDBY_TIMEOUT should be STANDBY_DEADLOCK_TIMEOUT here?\n>>>>>>>> I changed the code that way.\n>>>>>>>\n>>>>>>> As the comment of ResolveRecoveryConflictWithLock() says the\n>>>>>>> following, a deadlock is detected by the ordinary backend process:\n>>>>>>>\n>>>>>>> * Deadlocks involving the Startup process and an ordinary \n>>>>>>> backend\n>>>>>>> proces\n>>>>>>> * will be detected by the deadlock detector within the ordinary\n>>>>>>> backend.\n>>>>>>>\n>>>>>>> If we use STANDBY_DEADLOCK_TIMEOUT,\n>>>>>>> SendRecoveryConflictWithBufferPin() will be called after\n>>>>>>> DeadlockTimeout passed, but I think it's not necessary for the \n>>>>>>> startup\n>>>>>>> process in this case.\n>>>>>>\n>>>>>> Thanks for pointing this! You are right.\n>>>>>>\n>>>>>>\n>>>>>>> If we want to just wake up the startup process\n>>>>>>> maybe we can use STANDBY_TIMEOUT here?\n>>>>>>\n>>>>> Thanks for the patch updates! Except what we are still discussing \n>>>>> below,\n>>>>> it looks good to me.\n>>>>>\n>>>>>> When STANDBY_TIMEOUT happens, a request to release conflicting \n>>>>>> buffer\n>>>>>> pins is sent. Right? If so, we should not also use \n>>>>>> STANDBY_TIMEOUT there?\n>>>>>\n>>>>> Agree\n>>>>>\n>>>>>>\n>>>>>> Or, first of all, we don't need to enable the deadlock timer at all?\n>>>>>> Since what we'd like to do is to wake up after deadlock_timeout\n>>>>>> passes, we can do that by changing ProcWaitForSignal() so that it \n>>>>>> can\n>>>>>> accept the timeout and giving the deadlock_timeout to it. If we do\n>>>>>> this, maybe we can get rid of STANDBY_LOCK_TIMEOUT from\n>>>>>> ResolveRecoveryConflictWithLock(). Thought?\n>>>>\n>>>> Where do we enable deadlock timeout in hot standby case? 
You meant to\n>>>> enable it in ProcWaitForSignal() or where we set a timer for not hot\n>>>> standby case, in ProcSleep()?\n>>>\n>>> No, what I tried to say is to change \n>>> ResolveRecoveryConflictWithLock() so that it does\n>>>\n>>> 1. calculate the \"minimum\" timeout from deadlock_timeout and \n>>> max_standby_xxx_delay\n>>> 2. give the calculated timeout value to ProcWaitForSignal()\n>>> 3. wait for signal and timeout on ProcWaitForSignal()\n>>>\n>>> Since ProcWaitForSignal() calls WaitLatch(), seems it's not so \n>>> difficult to make ProcWaitForSignal() handle the timeout. If we do \n>>> this, I was thinking that we can get rid of enable_timeouts() from \n>>> ResolveRecoveryConflictWithLock().\n>>>\n>>>\n>>>>\n>>>>>\n>>>>> Why not simply use (again) the STANDBY_LOCK_TIMEOUT one? (as it \n>>>>> triggers\n>>>>> a call to StandbyLockTimeoutHandler() which does nothing, except \n>>>>> waking\n>>>>> up. That's what we want, right?)\n>>>>\n>>>> Right, what I wanted to mean is STANDBY_LOCK_TIMEOUT. The startup\n>>>> process can wake up and do nothing. Thank you for pointing out.\n>>>\n>>> Okay, understood! Firstly I was thinking that enabling the same type \n>>> (i.e., STANDBY_LOCK_TIMEOUT) of lock twice doesn't work properly, \n>>> but as far as I read the code, it works. In that case, only the \n>>> shorter timeout would be activated in enable_timeouts(). So I agree \n>>> to use STANDBY_LOCK_TIMEOUT.\n>>\n>> So I renamed the argument \"deadlock_timer\" in \n>> ResolveRecoveryConflictWithLock()\n>> because it's not the timer for deadlock and is confusing. Attached is \n>> the\n>> updated version of the patch. Barring any objection, I will commit \n>> this version.\n>\n> Since the recent commit 8900b5a9d5 changed the recovery conflict code,\n> I updated the patch. 
Attached is the updated version of the patch.\n>\nThanks for those updates!\n\nI had a look and the patch does look good to me.\n\nAs for the other discussion threads regarding:\n\n- \"maybe_log_conflict\" and \"maybe_update_title\" naming: I don’t have \nstrong opinions about it but I am more inclined to stay with the “maybe” \nnaming (as it is currently in this patch version) as it better reflects \nthat this may or may not occur.\n- the errdetail log message format in LogRecoveryConflict() (currently \nlooks like “Conflicting process: 25118.”): I don’t have strong opinions \nabout it but I am more inclined to stay as it is, as it looks similar to \nthe format being used in ProcSleep() (even if we can find different \nformats in other places though).\n\nBertrand\n\n\n\n",
"msg_date": "Thu, 7 Jan 2021 14:39:50 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "\n\nOn 2021/01/07 22:39, Drouvot, Bertrand wrote:\n> Hi,\n> \n> On 1/6/21 6:31 PM, Fujii Masao wrote:\n>> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n>>\n>>\n>>\n>> On 2020/12/15 0:20, Fujii Masao wrote:\n>>>\n>>>\n>>> On 2020/12/14 21:31, Fujii Masao wrote:\n>>>>\n>>>>\n>>>> On 2020/12/05 12:38, Masahiko Sawada wrote:\n>>>>> On Fri, Dec 4, 2020 at 7:22 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>>>>>>\n>>>>>> Hi,\n>>>>>>\n>>>>>> On 12/4/20 2:21 AM, Fujii Masao wrote:\n>>>>>>>\n>>>>>>> On 2020/12/04 9:28, Masahiko Sawada wrote:\n>>>>>>>> On Fri, Dec 4, 2020 at 2:54 AM Fujii Masao\n>>>>>>>> <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>>>>\n>>>>>>>>>\n>>>>>>>>>\n>>>>>>>>> On 2020/12/01 17:29, Drouvot, Bertrand wrote:\n>>>>>>>>>> Hi,\n>>>>>>>>>>\n>>>>>>>>>> On 12/1/20 12:35 AM, Masahiko Sawada wrote:\n>>>>>>>>>>> CAUTION: This email originated from outside of the organization.\n>>>>>>>>>>> Do not click links or open attachments unless you can confirm the\n>>>>>>>>>>> sender and know the content is safe.\n>>>>>>>>>>>\n>>>>>>>>>>>\n>>>>>>>>>>>\n>>>>>>>>>>> On Tue, Dec 1, 2020 at 3:25 AM Alvaro Herrera\n>>>>>>>>>>> <alvherre@alvh.no-ip.org> wrote:\n>>>>>>>>>>>> On 2020-Dec-01, Fujii Masao wrote:\n>>>>>>>>>>>>\n>>>>>>>>>>>>> + if (proc)\n>>>>>>>>>>>>> + {\n>>>>>>>>>>>>> + if (nprocs == 0)\n>>>>>>>>>>>>> + appendStringInfo(&buf, \"%d\", proc->pid);\n>>>>>>>>>>>>> + else\n>>>>>>>>>>>>> + appendStringInfo(&buf, \", %d\", proc->pid);\n>>>>>>>>>>>>> +\n>>>>>>>>>>>>> + nprocs++;\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> What happens if all the backends in wait_list have gone? In\n>>>>>>>>>>>>> other words,\n>>>>>>>>>>>>> how should we handle the case where nprocs == 0 (i.e., nprocs\n>>>>>>>>>>>>> has not been\n>>>>>>>>>>>>> incrmented at all)? 
This would very rarely happen, but can happen.\n>>>>>>>>>>>>> In this case, since buf.data is empty, at least there seems no\n>>>>>>>>>>>>> need to log\n>>>>>>>>>>>>> the list of conflicting processes in detail message.\n>>>>>>>>>>>> Yes, I noticed this too; this can be simplified by changing the\n>>>>>>>>>>>> condition in the ereport() call to be \"nprocs > 0\" (rather than\n>>>>>>>>>>>> wait_list being null), otherwise not print the errdetail. (You\n>>>>>>>>>>>> could\n>>>>>>>>>>>> test buf.data or buf.len instead, but that seems uglier to me.)\n>>>>>>>>>>> +1\n>>>>>>>>>>>\n>>>>>>>>>>> Maybe we can also improve the comment of this function from:\n>>>>>>>>>>>\n>>>>>>>>>>> + * This function also reports the details about the conflicting\n>>>>>>>>>>> + * process ids if *wait_list is not NULL.\n>>>>>>>>>>>\n>>>>>>>>>>> to \" This function also reports the details about the conflicting\n>>>>>>>>>>> process ids if exist\" or something.\n>>>>>>>>>>>\n>>>>>>>>>> Thank you all for the review/remarks.\n>>>>>>>>>>\n>>>>>>>>>> They have been addressed in the new attached patch version.\n>>>>>>>>>\n>>>>>>>>> Thanks for updating the patch! I read through the patch again\n>>>>>>>>> and applied the following chages to it. Attached is the updated\n>>>>>>>>> version of the patch. Could you review this version? If there is\n>>>>>>>>> no issue in it, I'm thinking to commit this version.\n>>>>>>>>\n>>>>>>>> Thank you for updating the patch! 
I have one question.\n>>>>>>>>\n>>>>>>>>>\n>>>>>>>>> + timeouts[cnt].id = STANDBY_TIMEOUT;\n>>>>>>>>> + timeouts[cnt].type = TMPARAM_AFTER;\n>>>>>>>>> + timeouts[cnt].delay_ms = DeadlockTimeout;\n>>>>>>>>>\n>>>>>>>>> Maybe STANDBY_TIMEOUT should be STANDBY_DEADLOCK_TIMEOUT here?\n>>>>>>>>> I changed the code that way.\n>>>>>>>>\n>>>>>>>> As the comment of ResolveRecoveryConflictWithLock() says the\n>>>>>>>> following, a deadlock is detected by the ordinary backend process:\n>>>>>>>>\n>>>>>>>> * Deadlocks involving the Startup process and an ordinary backend\n>>>>>>>> proces\n>>>>>>>> * will be detected by the deadlock detector within the ordinary\n>>>>>>>> backend.\n>>>>>>>>\n>>>>>>>> If we use STANDBY_DEADLOCK_TIMEOUT,\n>>>>>>>> SendRecoveryConflictWithBufferPin() will be called after\n>>>>>>>> DeadlockTimeout passed, but I think it's not necessary for the startup\n>>>>>>>> process in this case.\n>>>>>>>\n>>>>>>> Thanks for pointing this! You are right.\n>>>>>>>\n>>>>>>>\n>>>>>>>> If we want to just wake up the startup process\n>>>>>>>> maybe we can use STANDBY_TIMEOUT here?\n>>>>>>>\n>>>>>> Thanks for the patch updates! Except what we are still discussing below,\n>>>>>> it looks good to me.\n>>>>>>\n>>>>>>> When STANDBY_TIMEOUT happens, a request to release conflicting buffer\n>>>>>>> pins is sent. Right? If so, we should not also use STANDBY_TIMEOUT there?\n>>>>>>\n>>>>>> Agree\n>>>>>>\n>>>>>>>\n>>>>>>> Or, first of all, we don't need to enable the deadlock timer at all?\n>>>>>>> Since what we'd like to do is to wake up after deadlock_timeout\n>>>>>>> passes, we can do that by changing ProcWaitForSignal() so that it can\n>>>>>>> accept the timeout and giving the deadlock_timeout to it. If we do\n>>>>>>> this, maybe we can get rid of STANDBY_LOCK_TIMEOUT from\n>>>>>>> ResolveRecoveryConflictWithLock(). Thought?\n>>>>>\n>>>>> Where do we enable deadlock timeout in hot standby case? 
You meant to\n>>>>> enable it in ProcWaitForSignal() or where we set a timer for not hot\n>>>>> standby case, in ProcSleep()?\n>>>>\n>>>> No, what I tried to say is to change ResolveRecoveryConflictWithLock() so that it does\n>>>>\n>>>> 1. calculate the \"minimum\" timeout from deadlock_timeout and max_standby_xxx_delay\n>>>> 2. give the calculated timeout value to ProcWaitForSignal()\n>>>> 3. wait for signal and timeout on ProcWaitForSignal()\n>>>>\n>>>> Since ProcWaitForSignal() calls WaitLatch(), seems it's not so difficult to make ProcWaitForSignal() handle the timeout. If we do this, I was thinking that we can get rid of enable_timeouts() from ResolveRecoveryConflictWithLock().\n>>>>\n>>>>\n>>>>>\n>>>>>>\n>>>>>> Why not simply use (again) the STANDBY_LOCK_TIMEOUT one? (as it triggers\n>>>>>> a call to StandbyLockTimeoutHandler() which does nothing, except waking\n>>>>>> up. That's what we want, right?)\n>>>>>\n>>>>> Right, what I wanted to mean is STANDBY_LOCK_TIMEOUT. The startup\n>>>>> process can wake up and do nothing. Thank you for pointing out.\n>>>>\n>>>> Okay, understood! Firstly I was thinking that enabling the same type (i.e., STANDBY_LOCK_TIMEOUT) of lock twice doesn't work properly, but as far as I read the code, it works. In that case, only the shorter timeout would be activated in enable_timeouts(). So I agree to use STANDBY_LOCK_TIMEOUT.\n>>>\n>>> So I renamed the argument \"deadlock_timer\" in ResolveRecoveryConflictWithLock()\n>>> because it's not the timer for deadlock and is confusing. Attached is the\n>>> updated version of the patch. Barring any objection, I will commit this version.\n>>\n>> Since the recent commit 8900b5a9d5 changed the recovery conflict code,\n>> I updated the patch. Attached is the updated version of the patch.\n>>\n> Thanks for those updates!\n> \n> I had a look and the patch does look good to me.\n\nThanks for the review! 
I pushed the latest patch.\n\n\n> \n> As far the other threads regarding:\n> \n> - \"maybe_log_conflict\" and \"maybe_update_title\" naming: I don’t have strong opinions about it but I am more inclined to stay with the “maybe” naming (as it is currently in this patch version) as it better reflects that this may or not occur.\n> - the errdetail log message format in LogRecoveryConflict() (currently looks like “Conflicting process: 25118.”) : I don’t have strong opinions about it but I am more inclined to stay as it is, as it looks similar as the format being used in ProcSleep() (even if we can find different formats in other places though).\n\nAgreed. And if we come up with better idea about those topics,\nwe can improve the code later.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 8 Jan 2021 00:51:45 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On 2020/12/15 2:00, Fujii Masao wrote:\n> \n> \n> On 2020/12/15 0:49, Drouvot, Bertrand wrote:\n>> Hi,\n>>\n>> On 12/14/20 4:20 PM, Fujii Masao wrote:\n>>> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n>>>\n>>>\n>>>\n>>> On 2020/12/14 21:31, Fujii Masao wrote:\n>>>>\n>>>>\n>>>> On 2020/12/05 12:38, Masahiko Sawada wrote:\n>>>>> On Fri, Dec 4, 2020 at 7:22 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>>>>>>\n>>>>>> Hi,\n>>>>>>\n>>>>>> On 12/4/20 2:21 AM, Fujii Masao wrote:\n>>>>>>>\n>>>>>>> On 2020/12/04 9:28, Masahiko Sawada wrote:\n>>>>>>>> On Fri, Dec 4, 2020 at 2:54 AM Fujii Masao\n>>>>>>>> <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>>>>\n>>>>>>>>>\n>>>>>>>>>\n>>>>>>>>> On 2020/12/01 17:29, Drouvot, Bertrand wrote:\n>>>>>>>>>> Hi,\n>>>>>>>>>>\n>>>>>>>>>> On 12/1/20 12:35 AM, Masahiko Sawada wrote:\n>>>>>>>>>>> CAUTION: This email originated from outside of the organization.\n>>>>>>>>>>> Do not click links or open attachments unless you can confirm the\n>>>>>>>>>>> sender and know the content is safe.\n>>>>>>>>>>>\n>>>>>>>>>>>\n>>>>>>>>>>>\n>>>>>>>>>>> On Tue, Dec 1, 2020 at 3:25 AM Alvaro Herrera\n>>>>>>>>>>> <alvherre@alvh.no-ip.org> wrote:\n>>>>>>>>>>>> On 2020-Dec-01, Fujii Masao wrote:\n>>>>>>>>>>>>\n>>>>>>>>>>>>> + if (proc)\n>>>>>>>>>>>>> + {\n>>>>>>>>>>>>> + if (nprocs == 0)\n>>>>>>>>>>>>> + appendStringInfo(&buf, \"%d\", proc->pid);\n>>>>>>>>>>>>> + else\n>>>>>>>>>>>>> + appendStringInfo(&buf, \", %d\", proc->pid);\n>>>>>>>>>>>>> +\n>>>>>>>>>>>>> + nprocs++;\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> What happens if all the backends in wait_list have gone? In\n>>>>>>>>>>>>> other words,\n>>>>>>>>>>>>> how should we handle the case where nprocs == 0 (i.e., nprocs\n>>>>>>>>>>>>> has not been\n>>>>>>>>>>>>> incrmented at all)? 
This would very rarely happen, but can happen.\n>>>>>>>>>>>>> In this case, since buf.data is empty, at least there seems no\n>>>>>>>>>>>>> need to log\n>>>>>>>>>>>>> the list of conflicting processes in detail message.\n>>>>>>>>>>>> Yes, I noticed this too; this can be simplified by changing the\n>>>>>>>>>>>> condition in the ereport() call to be \"nprocs > 0\" (rather than\n>>>>>>>>>>>> wait_list being null), otherwise not print the errdetail. (You\n>>>>>>>>>>>> could\n>>>>>>>>>>>> test buf.data or buf.len instead, but that seems uglier to me.)\n>>>>>>>>>>> +1\n>>>>>>>>>>>\n>>>>>>>>>>> Maybe we can also improve the comment of this function from:\n>>>>>>>>>>>\n>>>>>>>>>>> + * This function also reports the details about the conflicting\n>>>>>>>>>>> + * process ids if *wait_list is not NULL.\n>>>>>>>>>>>\n>>>>>>>>>>> to \" This function also reports the details about the conflicting\n>>>>>>>>>>> process ids if exist\" or something.\n>>>>>>>>>>>\n>>>>>>>>>> Thank you all for the review/remarks.\n>>>>>>>>>>\n>>>>>>>>>> They have been addressed in the new attached patch version.\n>>>>>>>>>\n>>>>>>>>> Thanks for updating the patch! I read through the patch again\n>>>>>>>>> and applied the following chages to it. Attached is the updated\n>>>>>>>>> version of the patch. Could you review this version? If there is\n>>>>>>>>> no issue in it, I'm thinking to commit this version.\n>>>>>>>>\n>>>>>>>> Thank you for updating the patch! 
I have one question.\n>>>>>>>>\n>>>>>>>>>\n>>>>>>>>> + timeouts[cnt].id = STANDBY_TIMEOUT;\n>>>>>>>>> + timeouts[cnt].type = TMPARAM_AFTER;\n>>>>>>>>> + timeouts[cnt].delay_ms = DeadlockTimeout;\n>>>>>>>>>\n>>>>>>>>> Maybe STANDBY_TIMEOUT should be STANDBY_DEADLOCK_TIMEOUT here?\n>>>>>>>>> I changed the code that way.\n>>>>>>>>\n>>>>>>>> As the comment of ResolveRecoveryConflictWithLock() says the\n>>>>>>>> following, a deadlock is detected by the ordinary backend process:\n>>>>>>>>\n>>>>>>>> * Deadlocks involving the Startup process and an ordinary backend\n>>>>>>>> proces\n>>>>>>>> * will be detected by the deadlock detector within the ordinary\n>>>>>>>> backend.\n>>>>>>>>\n>>>>>>>> If we use STANDBY_DEADLOCK_TIMEOUT,\n>>>>>>>> SendRecoveryConflictWithBufferPin() will be called after\n>>>>>>>> DeadlockTimeout passed, but I think it's not necessary for the startup\n>>>>>>>> process in this case.\n>>>>>>>\n>>>>>>> Thanks for pointing this! You are right.\n>>>>>>>\n>>>>>>>\n>>>>>>>> If we want to just wake up the startup process\n>>>>>>>> maybe we can use STANDBY_TIMEOUT here?\n>>>>>>>\n>>>>>> Thanks for the patch updates! Except what we are still discussing below,\n>>>>>> it looks good to me.\n>>>>>>\n>>>>>>> When STANDBY_TIMEOUT happens, a request to release conflicting buffer\n>>>>>>> pins is sent. Right? If so, we should not also use STANDBY_TIMEOUT there?\n>>>>>>\n>>>>>> Agree\n>>>>>>\n>>>>>>>\n>>>>>>> Or, first of all, we don't need to enable the deadlock timer at all?\n>>>>>>> Since what we'd like to do is to wake up after deadlock_timeout\n>>>>>>> passes, we can do that by changing ProcWaitForSignal() so that it can\n>>>>>>> accept the timeout and giving the deadlock_timeout to it. If we do\n>>>>>>> this, maybe we can get rid of STANDBY_LOCK_TIMEOUT from\n>>>>>>> ResolveRecoveryConflictWithLock(). Thought?\n>>>>>\n>>>>> Where do we enable deadlock timeout in hot standby case? 
You meant to\n>>>>> enable it in ProcWaitForSignal() or where we set a timer for not hot\n>>>>> standby case, in ProcSleep()?\n>>>>\n>>>> No, what I tried to say is to change ResolveRecoveryConflictWithLock() so that it does\n>>>>\n>>>> 1. calculate the \"minimum\" timeout from deadlock_timeout and max_standby_xxx_delay\n>>>> 2. give the calculated timeout value to ProcWaitForSignal()\n>>>> 3. wait for signal and timeout on ProcWaitForSignal()\n>>>>\n>>>> Since ProcWaitForSignal() calls WaitLatch(), seems it's not so difficult to make ProcWaitForSignal() handle the timeout. If we do this, I was thinking that we can get rid of enable_timeouts() from ResolveRecoveryConflictWithLock().\n>>>>\n>>>>\n>>>>>\n>>>>>>\n>>>>>> Why not simply use (again) the STANDBY_LOCK_TIMEOUT one? (as it triggers\n>>>>>> a call to StandbyLockTimeoutHandler() which does nothing, except waking\n>>>>>> up. That's what we want, right?)\n>>>>>\n>>>>> Right, what I wanted to mean is STANDBY_LOCK_TIMEOUT. The startup\n>>>>> process can wake up and do nothing. Thank you for pointing out.\n>>>>\n>>>> Okay, understood! Firstly I was thinking that enabling the same type (i.e., STANDBY_LOCK_TIMEOUT) of lock twice doesn't work properly, but as far as I read the code, it works. In that case, only the shorter timeout would be activated in enable_timeouts(). So I agree to use STANDBY_LOCK_TIMEOUT.\n>>>\n>>> So I renamed the argument \"deadlock_timer\" in ResolveRecoveryConflictWithLock()\n>>> because it's not the timer for deadlock and is confusing. Attached is the\n>>> updated version of the patch. Barring any objection, I will commit this version.\n>>\n>> Thanks for the update!\n>>\n>> Indeed the naming is more appropriate and less confusing that way, this version looks all good to me.\n> \n> Thanks for the review! 
I'm thinking to wait half a day before commiting\n> the patch just in the case someone may object the patch.\n> \n> BTW, attached is the POC patch that implements the idea discussed upthread;\n> if log_recovery_conflict_waits is enabled, the startup process reports\n> the log also after the recovery conflict was resolved and the startup process\n> finished waiting for it. This patch needs to be applied after\n> v11-0002-Log-the-standby-recovery-conflict-waits.patch is applied.\n\n\nAttached is the updated version of the patch. This can be applied to current master.\n\nWith the patch, for example, if the startup process waited longer than\ndeadlock_timeout for the recovery conflict on the lock, the latter log\nmessage in the followings would be additionally output.\n\n LOG: recovery still waiting after 1001.223 ms: recovery conflict on lock\n LOG: recovery finished waiting after 19004.694 ms: recovery conflict on lock\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Fri, 8 Jan 2021 01:32:11 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "At Fri, 8 Jan 2021 01:32:11 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> Attached is the updated version of the patch. This can be applied to\n> current master.\n> \n> With the patch, for example, if the startup process waited longer than\n> deadlock_timeout for the recovery conflict on the lock, the latter log\n> message in the followings would be additionally output.\n> \n> LOG: recovery still waiting after 1001.223 ms: recovery conflict on\n> lock\n> LOG: recovery finished waiting after 19004.694 ms: recovery conflict\n> on lock\n\n+\t\t\t/*\n+\t\t\t * Emit the log message if recovery conflict on buffer pin was resolved but\n+\t\t\t * the startup process waited longer than deadlock_timeout for it.\n\nThe first line is beyond the 80th column.\n\nLGTM other than the above.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 08 Jan 2021 11:17:12 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On 2021/01/08 11:17, Kyotaro Horiguchi wrote:\n> At Fri, 8 Jan 2021 01:32:11 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>\n>> Attached is the updated version of the patch. This can be applied to\n>> current master.\n>>\n>> With the patch, for example, if the startup process waited longer than\n>> deadlock_timeout for the recovery conflict on the lock, the latter log\n>> message in the followings would be additionally output.\n>>\n>> LOG: recovery still waiting after 1001.223 ms: recovery conflict on\n>> lock\n>> LOG: recovery finished waiting after 19004.694 ms: recovery conflict\n>> on lock\n> \n> +\t\t\t/*\n> +\t\t\t * Emit the log message if recovery conflict on buffer pin was resolved but\n> +\t\t\t * the startup process waited longer than deadlock_timeout for it.\n> \n> The first line is beyond the 80th column.\n\nThanks for pointing out this! This happened because I forgot to run pgindent\nfor bufmgr.c. Attached is the updated version of the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Fri, 8 Jan 2021 13:19:18 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "Hi,\n\nOn 1/7/21 4:51 PM, Fujii Masao wrote:\n> Thanks for the review! I pushed the latest patch.\n>\nThanks all of you for your precious help on this patch!\n\nThe original idea behind this thread has been split into 3 pieces.\n\nPieces 1 (9d0bd95fa90a7243047a74e29f265296a9fc556d) and 2 \n(0650ff23038bc3eb8d8fd851744db837d921e285) have now been committed, the \nlast one is to add more information regarding the canceled statements \n(if any), like:\n\n* What was the blocker(s) doing?\n* When did the blocker(s) started their queries (if any)?\n* What was the blocker(s) waiting for? on which wait event?\n\nDoes this proposal sound good to you? If so I'll start a new thread with \na patch proposal.\n\nBertrand\n\n\n\n",
"msg_date": "Fri, 8 Jan 2021 06:02:51 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "Hi,\n\nOn 1/8/21 5:19 AM, Fujii Masao wrote:\n>\n> On 2021/01/08 11:17, Kyotaro Horiguchi wrote:\n>> At Fri, 8 Jan 2021 01:32:11 +0900, Fujii Masao \n>> <masao.fujii@oss.nttdata.com> wrote in\n>>>\n>>> Attached is the updated version of the patch. This can be applied to\n>>> current master.\n>>>\n>>> With the patch, for example, if the startup process waited longer than\n>>> deadlock_timeout for the recovery conflict on the lock, the latter log\n>>> message in the followings would be additionally output.\n>>>\n>>> LOG: recovery still waiting after 1001.223 ms: recovery \n>>> conflict on\n>>> lock\n>>> LOG: recovery finished waiting after 19004.694 ms: recovery \n>>> conflict\n>>> on lock\n>>\n>> + /*\n>> + * Emit the log message if recovery conflict on \n>> buffer pin was resolved but\n>> + * the startup process waited longer than \n>> deadlock_timeout for it.\n>>\n>> The first line is beyond the 80th column.\n>\n> Thanks for pointing out this! This happened because I forgot to run \n> pgindent\n> for bufmgr.c. Attached is the updated version of the patch.\n\nThe patch looks good to me.\n\nBertrand\n\n\n\n",
"msg_date": "Fri, 8 Jan 2021 06:15:26 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "\n\nOn 2021/01/08 14:02, Drouvot, Bertrand wrote:\n> Hi,\n> \n> On 1/7/21 4:51 PM, Fujii Masao wrote:\n>> Thanks for the review! I pushed the latest patch.\n>>\n> Thanks all of you for your precious help on this patch!\n> \n> The original idea behind this thread has been split into 3 pieces.\n> \n> Pieces 1 (9d0bd95fa90a7243047a74e29f265296a9fc556d) and 2 (0650ff23038bc3eb8d8fd851744db837d921e285) have now been committed, the last one is to add more information regarding the canceled statements (if any), like:\n> \n> * What was the blocker(s) doing?\n\nThis \"canceled statement\" is just one that's canceled by recovery conflict?\nIf so, the blocker is always the startup process? Sorry maybe I fail to\nunderstand this idea well..\n\n\n> * When did the blocker(s) started their queries (if any)?\n\nIf the blocker is the startup process, it doesn't start any query at all?\n\nAnyway if you post the patch, I'm happy to review that!\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 8 Jan 2021 15:24:36 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "\nOn 1/8/21 7:24 AM, Fujii Masao wrote:\n> CAUTION: This email originated from outside of the organization. Do \n> not click links or open attachments unless you can confirm the sender \n> and know the content is safe.\n>\n>\n>\n> On 2021/01/08 14:02, Drouvot, Bertrand wrote:\n>> Hi,\n>>\n>> On 1/7/21 4:51 PM, Fujii Masao wrote:\n>>> Thanks for the review! I pushed the latest patch.\n>>>\n>> Thanks all of you for your precious help on this patch!\n>>\n>> The original idea behind this thread has been split into 3 pieces.\n>>\n>> Pieces 1 (9d0bd95fa90a7243047a74e29f265296a9fc556d) and 2 \n>> (0650ff23038bc3eb8d8fd851744db837d921e285) have now been committed, \n>> the last one is to add more information regarding the canceled \n>> statements (if any), like:\n>>\n>> * What was the blocker(s) doing?\n>\n> This \"canceled statement\" is just one that's canceled by recovery \n> conflict?\n> If so, the blocker is always the startup process? Sorry maybe I fail to\n> understand this idea well..\n>\n>\nBy blocker, I meant the one being canceled (I had in mind the startup \nprocess being the blocked one, not the blocker one). Sorry if i have not \nbeen clear enough.\n\nAs an example, it could provide things like:\n\n2020-06-15 06:48:54.778 UTC [7037] LOG: about to interrupt pid: 7037, \nbackend_type: client backend, state: active, wait_event_type: Timeout, \nwait_event: PgSleep, query_start: 2020-06-15 06:48:13.008427+00\n2020-06-15 06:48:54.778 UTC [7037] ERROR: canceling statement due to \nconflict with recovery\n\n> Anyway if you post the patch, I'm happy to review that!\n\nThanks!\n\nBertrand\n\n\n",
"msg_date": "Fri, 8 Jan 2021 07:42:46 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On Fri, Jan 8, 2021 at 2:15 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> Hi,\n>\n> On 1/8/21 5:19 AM, Fujii Masao wrote:\n> >\n> > On 2021/01/08 11:17, Kyotaro Horiguchi wrote:\n> >> At Fri, 8 Jan 2021 01:32:11 +0900, Fujii Masao\n> >> <masao.fujii@oss.nttdata.com> wrote in\n> >>>\n> >>> Attached is the updated version of the patch. This can be applied to\n> >>> current master.\n> >>>\n> >>> With the patch, for example, if the startup process waited longer than\n> >>> deadlock_timeout for the recovery conflict on the lock, the latter log\n> >>> message in the followings would be additionally output.\n> >>>\n> >>> LOG: recovery still waiting after 1001.223 ms: recovery\n> >>> conflict on\n> >>> lock\n> >>> LOG: recovery finished waiting after 19004.694 ms: recovery\n> >>> conflict\n> >>> on lock\n> >>\n> >> + /*\n> >> + * Emit the log message if recovery conflict on\n> >> buffer pin was resolved but\n> >> + * the startup process waited longer than\n> >> deadlock_timeout for it.\n> >>\n> >> The first line is beyond the 80th column.\n> >\n> > Thanks for pointing out this! This happened because I forgot to run\n> > pgindent\n> > for bufmgr.c. Attached is the updated version of the patch.\n>\n> The patch looks good to me.\n\nThanks for the review! I pushed the patch.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Wed, 13 Jan 2021 23:04:12 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
},
{
"msg_contents": "On Fri, Jan 8, 2021 at 3:43 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n>\n> On 1/8/21 7:24 AM, Fujii Masao wrote:\n> > CAUTION: This email originated from outside of the organization. Do\n> > not click links or open attachments unless you can confirm the sender\n> > and know the content is safe.\n> >\n> >\n> >\n> > On 2021/01/08 14:02, Drouvot, Bertrand wrote:\n> >> Hi,\n> >>\n> >> On 1/7/21 4:51 PM, Fujii Masao wrote:\n> >>> Thanks for the review! I pushed the latest patch.\n> >>>\n> >> Thanks all of you for your precious help on this patch!\n> >>\n> >> The original idea behind this thread has been split into 3 pieces.\n> >>\n> >> Pieces 1 (9d0bd95fa90a7243047a74e29f265296a9fc556d) and 2\n> >> (0650ff23038bc3eb8d8fd851744db837d921e285) have now been committed,\n> >> the last one is to add more information regarding the canceled\n> >> statements (if any), like:\n> >>\n> >> * What was the blocker(s) doing?\n> >\n> > This \"canceled statement\" is just one that's canceled by recovery\n> > conflict?\n> > If so, the blocker is always the startup process? Sorry maybe I fail to\n> > understand this idea well..\n> >\n> >\n> By blocker, I meant the one being canceled (I had in mind the startup\n> process being the blocked one, not the blocker one).\n\nThanks! Understood.\n\n> Sorry if i have not\n> been clear enough.\n>\n> As an example, it could provide things like:\n>\n> 2020-06-15 06:48:54.778 UTC [7037] LOG: about to interrupt pid: 7037,\n> backend_type: client backend, state: active, wait_event_type: Timeout,\n> wait_event: PgSleep, query_start: 2020-06-15 06:48:13.008427+00\n\nSorry I'm not sure yet how this information is actually useful.\nWhat is the actual use case of this information?\n\nMaybe this topic should be discussed in new thread.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Wed, 13 Jan 2021 23:14:59 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Information during standby recovery conflicts"
}
] |
[
{
"msg_contents": "Hello, hackers.\n\nI would like to propose a patch, which allows passing one extra \nparameter to pg_create_physical_replication_slot() — restart_lsn. It \ncould be very helpful if we already have some backup with STOP_LSN from \na couple of hours in the past and we want to quickly verify wether it is \npossible to create a replica from this backup or not.\n\nIf the WAL segment for the specified restart_lsn (STOP_LSN of the \nbackup) exists, then the function will create a physical replication \nslot and will keep all the WAL segments required by the replica to catch \nup with the primary. Otherwise, it returns error, which means that the \nrequired WAL segments have been already utilised, so we do need to take \na new backup. Without passing this newly added parameter \npg_create_physical_replication_slot() works as before.\n\nWhat do you think about this?\n\n-- \nVyacheslav Makarov\n\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 18 Jun 2020 15:39:09 +0300",
"msg_from": "Vyacheslav Makarov <v.makarov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "[PATCH] Allow to specify restart_lsn in\n pg_create_physical_replication_slot()"
},
{
"msg_contents": "On Thu, Jun 18, 2020 at 03:39:09PM +0300, Vyacheslav Makarov wrote:\n> If the WAL segment for the specified restart_lsn (STOP_LSN of the backup)\n> exists, then the function will create a physical replication slot and will\n> keep all the WAL segments required by the replica to catch up with the\n> primary. Otherwise, it returns error, which means that the required WAL\n> segments have been already utilised, so we do need to take a new backup.\n> Without passing this newly added parameter\n> pg_create_physical_replication_slot() works as before.\n> \n> What do you think about this?\n\nI think that this was discussed in the past (perhaps one of the\nthreads related to WAL advancing actually?), and this stuff is full of\nholes when it comes to think about error handling with checkpoints\nrunning in parallel, potentially doing recycling of segments you would\nexpect to be around based on your input value for restart_lsn *while*\npg_create_physical_replication_slot() is still running and\nmanipulating the on-disk slot information. I suspect that this also\nbreaks a couple of assumptions behind concurrent calls of the minimum\nLSN calculated across slots when a caller sees fit to recompute the\nthresholds (WAL senders mainly here, depending on the replication\nactivity).\n--\nMichael",
"msg_date": "Fri, 19 Jun 2020 09:59:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow to specify restart_lsn in\n pg_create_physical_replication_slot()"
},
{
"msg_contents": "On 2020-06-19 03:59, Michael Paquier wrote:\n> On Thu, Jun 18, 2020 at 03:39:09PM +0300, Vyacheslav Makarov wrote:\n>> If the WAL segment for the specified restart_lsn (STOP_LSN of the \n>> backup)\n>> exists, then the function will create a physical replication slot and \n>> will\n>> keep all the WAL segments required by the replica to catch up with the\n>> primary. Otherwise, it returns error, which means that the required \n>> WAL\n>> segments have been already utilised, so we do need to take a new \n>> backup.\n>> Without passing this newly added parameter\n>> pg_create_physical_replication_slot() works as before.\n>> \n>> What do you think about this?\n> \n> I think that this was discussed in the past (perhaps one of the\n> threads related to WAL advancing actually?),\n> \n\nI have searched through the archives a bit and found one thread related \nto slots advancing [1]. It was dedicated to a problem of advancing slots \nwhich do not reserve WAL yet, if I get it correctly. Although it is \nsomehow related to the topic, it was a slightly different issue, IMO.\n\n> \n> and this stuff is full of\n> holes when it comes to think about error handling with checkpoints\n> running in parallel, potentially doing recycling of segments you would\n> expect to be around based on your input value for restart_lsn *while*\n> pg_create_physical_replication_slot() is still running and\n> manipulating the on-disk slot information. I suspect that this also\n> breaks a couple of assumptions behind concurrent calls of the minimum\n> LSN calculated across slots when a caller sees fit to recompute the\n> thresholds (WAL senders mainly here, depending on the replication\n> activity).\n> \n\nThese are the right concerns, but all of them should be applicable to \nthe pg_create_physical_replication_slot() + immediately_reserve == true \nin the same way, doesn't it? 
I think so, since in that case we are doing \na pretty similar thing — trying to reserve some WAL segment that may be \nconcurrently deleted.\n\nAnd this is exactly the reason why ReplicationSlotReserveWal() does it \nin several steps in a loop:\n\n1. Creates a slot with some restart_lsn.\n2. Does ReplicationSlotsComputeRequiredLSN() to prevent removal of the \nWAL segment with this restart_lsn.\n3. Checks that required WAL segment is still there.\n4. Repeat if this attempt to prevent WAL removal has failed.\n\nI guess that the only difference in the case of proposed scenario is \nthat we do not have a chance for step 4, since we do need some specific \nrestart_lsn, not any recent restart_lsn, i.e. in this case we have to:\n\n1. Create a slot with restart_lsn specified by user.\n2. Do ReplicationSlotsComputeRequiredLSN() to prevent WAL removal.\n3. Check that required WAL segment is still there and report ERROR to \nthe user if it is not.\n\nI have eyeballed the attached patch and it looks like doing exactly the \nsame, so issues with concurrent deletion are not obvious for me. Or, \nthere are should be the same issues for \npg_create_physical_replication_slot() + immediately_reserve == true with \ncurrent master implementation.\n\n[1] \nhttps://www.postgresql.org/message-id/flat/20180626071305.GH31353%40paquier.xyz\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n",
"msg_date": "Fri, 19 Jun 2020 17:20:11 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow to specify restart_lsn in\n pg_create_physical_replication_slot()"
},
{
"msg_contents": "\n\nOn 2020/06/19 23:20, Alexey Kondratov wrote:\n> On 2020-06-19 03:59, Michael Paquier wrote:\n>> On Thu, Jun 18, 2020 at 03:39:09PM +0300, Vyacheslav Makarov wrote:\n>>> If the WAL segment for the specified restart_lsn (STOP_LSN of the backup)\n>>> exists, then the function will create a physical replication slot and will\n>>> keep all the WAL segments required by the replica to catch up with the\n>>> primary. Otherwise, it returns error, which means that the required WAL\n>>> segments have been already utilised, so we do need to take a new backup.\n>>> Without passing this newly added parameter\n>>> pg_create_physical_replication_slot() works as before.\n>>>\n>>> What do you think about this?\n\nCurrently pg_create_physical_replication_slot() and CREATE_REPLICATION_SLOT\nreplication command seem to be \"identical\". So if we add new option into one,\nwe should add it also into another?\n\n\nWhat happens if future LSN is specified in restart_lsn? With the patch,\nin this case, if the segment at that LSN exists (e.g., because it's recycled\none), the slot seems to be successfully created. However if the LSN is\nfar future and the segment doesn't exist, the creation of slot seems to fail.\nThis behavior looks fragile and confusing. We should accept future LSN\nwhether its segment currently exists or not?\n\n\n+\tif (!RecoveryInProgress() && !SlotIsLogical(MyReplicationSlot))\n\nWith the patch, the given restart_lsn seems to be ignored during recovery.\nWhy?\n\n>>\n>> I think that this was discussed in the past (perhaps one of the\n>> threads related to WAL advancing actually?),\n>>\n> \n> I have searched through the archives a bit and found one thread related to slots advancing [1]. It was dedicated to a problem of advancing slots which do not reserve WAL yet, if I get it correctly. 
Although it is somehow related to the topic, it was a slightly different issue, IMO.\n> \n>>\n>> and this stuff is full of\n>> holes when it comes to think about error handling with checkpoints\n>> running in parallel, potentially doing recycling of segments you would\n>> expect to be around based on your input value for restart_lsn *while*\n>> pg_create_physical_replication_slot() is still running and\n>> manipulating the on-disk slot information. I suspect that this also\n>> breaks a couple of assumptions behind concurrent calls of the minimum\n>> LSN calculated across slots when a caller sees fit to recompute the\n>> thresholds (WAL senders mainly here, depending on the replication\n>> activity).\n>>\n> \n> These are the right concerns, but all of them should be applicable to the pg_create_physical_replication_slot() + immediately_reserve == true in the same way, doesn't it? I think so, since in that case we are doing a pretty similar thing — trying to reserve some WAL segment that may be concurrently deleted.\n> \n> And this is exactly the reason why ReplicationSlotReserveWal() does it in several steps in a loop:\n> \n> 1. Creates a slot with some restart_lsn.\n> 2. Does ReplicationSlotsComputeRequiredLSN() to prevent removal of the WAL segment with this restart_lsn.\n> 3. Checks that required WAL segment is still there.\n> 4. Repeat if this attempt to prevent WAL removal has failed.\n\nWhat happens if concurrent checkpoint decides to remove the segment\nat restart_lsn before #2 and then actually removes it after #3?\nThe replication slot is successfully created with the given restart_lsn,\nbut the reserved segment has already been removed?\n\n\n> I guess that the only difference in the case of proposed scenario is that we do not have a chance for step 4, since we do need some specific restart_lsn, not any recent restart_lsn, i.e. in this case we have to:\n> \n> 1. Create a slot with restart_lsn specified by user.\n> 2. 
Do ReplicationSlotsComputeRequiredLSN() to prevent WAL removal.\n> 3. Check that required WAL segment is still there and report ERROR to the user if it is not.\n\nThe similar situation as the above may happen.\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Sat, 20 Jun 2020 03:57:16 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow to specify restart_lsn in\n pg_create_physical_replication_slot()"
},
{
"msg_contents": "On 2020-06-19 21:57, Fujii Masao wrote:\n> On 2020/06/19 23:20, Alexey Kondratov wrote:\n>> On 2020-06-19 03:59, Michael Paquier wrote:\n>>> On Thu, Jun 18, 2020 at 03:39:09PM +0300, Vyacheslav Makarov wrote:\n>>>> If the WAL segment for the specified restart_lsn (STOP_LSN of the \n>>>> backup)\n>>>> exists, then the function will create a physical replication slot \n>>>> and will\n>>>> keep all the WAL segments required by the replica to catch up with \n>>>> the\n>>>> primary. Otherwise, it returns error, which means that the required \n>>>> WAL\n>>>> segments have been already utilised, so we do need to take a new \n>>>> backup.\n>>>> Without passing this newly added parameter\n>>>> pg_create_physical_replication_slot() works as before.\n>>>> \n>>>> What do you think about this?\n> \n> Currently pg_create_physical_replication_slot() and \n> CREATE_REPLICATION_SLOT\n> replication command seem to be \"idential\". So if we add new option into \n> one,\n> we should add it also into another?\n> \n\nI wonder how it could be used via the replication protocol, but probably \nthis option should be added there as well for consistency.\n\n> \n> What happen if future LSN is specified in restart_lsn? With the patch,\n> in this case, if the segment at that LSN exists (e.g., because it's \n> recycled\n> one), the slot seems to be successfully created. However if the LSN is\n> far future and the segment doesn't exist, the creation of slot seems to \n> fail.\n> This behavior looks fragile and confusing. We should accept future LSN\n> whether its segment currently exists or not?\n> \n\nBut what about a possible timeline switch? If we allow specifying it as \nfurther in the future as one wanted, then appropriate segment with \nspecified LSN may be created in the different timeline if it would be \nswitched, so it may be misleading. 
I am not even sure about allowing \nfuture LSN for existing segments, since PITR / timeline switch may occur \njust after the slot creation, so the pointer may never be valid. Would \nit be better to completely disallow future LSN?\n\nAnd here I noticed another moment in the patch. TimeLineID of the last \nrestart/checkpoint is used to detect whether WAL segment file exists or \nnot. It means that if we try to create a slot just after a timeline \nswitch, then we could not specify the oldest LSN actually available on \nthe disk, since it may be from the previous timeline. One can use LSN \nonly within the current timeline. It seems to be fine, but should be \ncovered in the docs.\n\n> \n> +\tif (!RecoveryInProgress() && !SlotIsLogical(MyReplicationSlot))\n> \n> With the patch, the given restart_lsn seems to be ignored during \n> recovery.\n> Why?\n> \n\nI have the same question, not sure that this is needed here. It looks \nmore like a forgotten copy-paste from ReplicationSlotReserveWal().\n\n>>> \n>>> I think that this was discussed in the past (perhaps one of the\n>>> threads related to WAL advancing actually?),\n>>> \n>> \n>> I have searched through the archives a bit and found one thread \n>> related to slots advancing [1]. It was dedicated to a problem of \n>> advancing slots which do not reserve WAL yet, if I get it correctly. 
\n>> Although it is somehow related to the topic, it was a slightly \n>> different issue, IMO.\n>> \n>>> \n>>> and this stuff is full of\n>>> holes when it comes to think about error handling with checkpoints\n>>> running in parallel, potentially doing recycling of segments you \n>>> would\n>>> expect to be around based on your input value for restart_lsn *while*\n>>> pg_create_physical_replication_slot() is still running and\n>>> manipulating the on-disk slot information.\n>>> ...\n>> \n>> These are the right concerns, but all of them should be applicable to \n>> the pg_create_physical_replication_slot() + immediately_reserve == \n>> true in the same way, doesn't it? I think so, since in that case we \n>> are doing a pretty similar thing — trying to reserve some WAL segment \n>> that may be concurrently deleted.\n>> \n>> And this is exactly the reason why ReplicationSlotReserveWal() does it \n>> in several steps in a loop:\n>> \n>> 1. Creates a slot with some restart_lsn.\n>> 2. Does ReplicationSlotsComputeRequiredLSN() to prevent removal of the \n>> WAL segment with this restart_lsn.\n>> 3. Checks that required WAL segment is still there.\n>> 4. Repeat if this attempt to prevent WAL removal has failed.\n> \n> What happens if concurrent checkpoint decides to remove the segment\n> at restart_lsn before #2 and then actually removes it after #3?\n> The replication slot is successfully created with the given \n> restart_lsn,\n> but the reserved segment has already been removed?\n> \n\nI thought about it a bit more and it seems that yes, there is a race even \nfor a current pg_create_physical_replication_slot() + \nimmediately_reserve == true, i.e. ReplicationSlotReserveWal(). However, \nthe chance is very subtle since we take a current GetRedoRecPtr() there. 
\nProbably one could reproduce it with wal_keep_segments = 1 by holding / \nreleasing backend doing the slot creation and checkpointer with gdb, but \nnot sure that it is an issue anywhere in the real world.\n\nMaybe I am wrong, but it is not clear for me why current \nReplicationSlotReserveWal() routine does not have that race. I will try \nto reproduce it though.\n\nThings get worse when we allow specifying an older LSN, since it has a \nhigher chances to be at the horizon of deletion by checkpointer. Anyway, \nif I get it correctly, with a current patch slot will be created \nsuccessfully, but will be obsolete and should be invalidated by the next \ncheckpoint.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n",
"msg_date": "Mon, 22 Jun 2020 20:18:58 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow to specify restart_lsn in\n pg_create_physical_replication_slot()"
},
{
"msg_contents": "On Mon, Jun 22, 2020 at 08:18:58PM +0300, Alexey Kondratov wrote:\n> I wonder how it could be used via the replication protocol, but probably\n> this option should be added there as well for consistency.\n\nMostly the same code path is taken by the SQL function and the\nreplication command, so adding a new option to both when adding a new\noption makes sense to me for consistency. The SQL functions are\nactually easier to use when it comes to tests, as there is no need to\nworry about COPY_BOTH not supported in psql.\n\n> Things get worse when we allow specifying an older LSN, since it has a\n> higher chances to be at the horizon of deletion by checkpointer. Anyway, if\n> I get it correctly, with a current patch slot will be created successfully,\n> but will be obsolete and should be invalidated by the next checkpoint.\n\nIs that a behavior acceptable for the end user? For example, a\nphysical slot that is created to immediately reserve WAL may get \ninvalidated, causing it to actually not keep WAL around contrary to\nwhat the user has wanted the command to do.\n--\nMichael",
"msg_date": "Tue, 23 Jun 2020 10:18:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow to specify restart_lsn in\n pg_create_physical_replication_slot()"
},
{
"msg_contents": "On 2020-06-23 04:18, Michael Paquier wrote:\n> On Mon, Jun 22, 2020 at 08:18:58PM +0300, Alexey Kondratov wrote:\n>> Things get worse when we allow specifying an older LSN, since it has a\n>> higher chances to be at the horizon of deletion by checkpointer. \n>> Anyway, if\n>> I get it correctly, with a current patch slot will be created \n>> successfully,\n>> but will be obsolete and should be invalidated by the next checkpoint.\n> \n> Is that a behavior acceptable for the end user? For example, a\n> physical slot that is created to immediately reserve WAL may get\n> invalidated, causing it to actually not keep WAL around contrary to\n> what the user has wanted the command to do.\n> \n\nI can imagine that it could be acceptable in the initially proposed \nscenario for someone, since creation of a slot with historical \nrestart_lsn is already unpredictable — required segment may exist or may \ndo not exist. However, adding here an undefined behaviour even after a \nslot creation does not look good to me anyway.\n\nI have looked closely on the checkpointer code and another problem is \nthat it decides once which WAL segments to delete based on the \nreplicationSlotMinLSN, and does not check anything before the actual \nfile deletion. That way the gap for a possible race is even wider. I do \nnot know how to completely get rid of this race without introducing of \nsome locking mechanism, which may be costly.\n\nThanks for feedback\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n",
"msg_date": "Sat, 27 Jun 2020 00:08:03 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow to specify restart_lsn in\n pg_create_physical_replication_slot()"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile checking copy from code I found that the function parameter\ncolumn_no is not used in CopyReadBinaryAttribute. I felt this could be\nremoved.\nAttached patch contains the changes for the same.\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 18 Jun 2020 19:00:57 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Cleanup - Removal of unused function parameter from\n CopyReadBinaryAttribute"
},
{
"msg_contents": "On Thu, Jun 18, 2020 at 7:01 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Hi,\n>\n> While checking copy from code I found that the function parameter\n> column_no is not used in CopyReadBinaryAttribute. I felt this could be\n> removed.\n> Attached patch contains the changes for the same.\n> Thoughts?\n>\n\nI don't see any problem in removing this extra parameter.\n\nHowever another thought, can it be used to report a bit meaningful\nerror for field size < 0 check?\n\nif (fld_size < 0)\n ereport(ERROR,\n (errcode(ERRCODE_BAD_COPY_FILE_FORMAT),\n errmsg(\"invalid field size for column %d\", column_no)));\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 18 Jun 2020 19:39:41 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Cleanup - Removal of unused function parameter from\n CopyReadBinaryAttribute"
},
{
"msg_contents": "\n\nOn 2020/06/18 23:09, Bharath Rupireddy wrote:\n> On Thu, Jun 18, 2020 at 7:01 PM vignesh C <vignesh21@gmail.com> wrote:\n>>\n>> Hi,\n>>\n>> While checking copy from code I found that the function parameter\n>> column_no is not used in CopyReadBinaryAttribute. I felt this could be\n>> removed.\n>> Attached patch contains the changes for the same.\n>> Thoughts?\n>>\n> \n> I don't see any problem in removing this extra parameter.\n> \n> However another thought, can it be used to report a bit meaningful\n> error for field size < 0 check?\n\ncolumn_no was used for that purpose in the past, but commit 0e319c7ad7\nchanged that. If we want to use column_no in the log message again,\nit's better to check why commit 0e319c7ad7 got rid of column_no from\nthe message.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 19 Jun 2020 01:35:06 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Cleanup - Removal of unused function parameter from\n CopyReadBinaryAttribute"
},
{
"msg_contents": "On Thu, Jun 18, 2020 at 10:05 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n>\n>\n> On 2020/06/18 23:09, Bharath Rupireddy wrote:\n> > On Thu, Jun 18, 2020 at 7:01 PM vignesh C <vignesh21@gmail.com> wrote:\n> >>\n> >> Hi,\n> >>\n> >> While checking copy from code I found that the function parameter\n> >> column_no is not used in CopyReadBinaryAttribute. I felt this could be\n> >> removed.\n> >> Attached patch contains the changes for the same.\n> >> Thoughts?\n> >>\n> >\n> > I don't see any problem in removing this extra parameter.\n> >\n> > However another thought, can it be used to report a bit meaningful\n> > error for field size < 0 check?\n>\n> column_no was used for that purpose in the past, but commit 0e319c7ad7\n> changed that.\n>\n\nYeah, but not sure why? By looking at the commit message and change\nit is difficult to say why it has been removed? Tom has made that\nchange but I don't think he would remember it, in any case, adding him\nin the email to see if he remembers anything related to it.\n\ncommit 0e319c7ad7665673103f0b10752700fd2f33acd3\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Mon Sep 29 22:06:40 2003 +0000\n\n Improve context display for failures during COPY IN, as recently\n discussed on pghackers.\n..\n..\n@@ -1917,7 +2019,7 @@ CopyReadBinaryAttribute(int column_no, FmgrInfo\n*flinfo, Oid typelem,\n if (fld_size < 0)\n ereport(ERROR,\n (errcode(ERRCODE_BAD_COPY_FILE_FORMAT),\n- errmsg(\"invalid size for field %d\", column_no)));\n+ errmsg(\"invalid field size\")));\n\n /* reset attribute_buf to empty, and load raw data in it */\n attribute_buf.len = 0;\n@@ -1944,8 +2046,7 @@ CopyReadBinaryAttribute(int column_no, FmgrInfo\n*flinfo, Oid typelem,\n if (attribute_buf.cursor != attribute_buf.len)\n ereport(ERROR,\n (errcode(ERRCODE_INVALID_BINARY_REPRESENTATION),\n- errmsg(\"incorrect binary data format in field %d\",\n- column_no)));\n+ errmsg(\"incorrect binary data format\")));\n\n-- \nWith Regards,\nAmit 
Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 19 Jun 2020 08:30:31 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Cleanup - Removal of unused function parameter from\n CopyReadBinaryAttribute"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Thu, Jun 18, 2020 at 10:05 PM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>> column_no was used for that purpose in the past, but commit 0e319c7ad7\n>> changed that.\n\n> Yeah, but not sure why? By looking at the commit message and change\n> it is difficult to say why it has been removed? Tom has made that\n> change but I don't think he would remember it, in any case, adding him\n> in the email to see if he remembers anything related to it.\n\nHm, no, that commit is nearly old enough to vote :-(\n\nHowever, I dug around in the archives, and I found what seems to be\nthe relevant pghackers thread:\n\nhttps://www.postgresql.org/message-id/flat/28188.1064615075%40sss.pgh.pa.us#8e0c07452bb7e729829d456cfb0ec485\n\nLooking at that, I think I concluded that these error cases are not useful\nindications of problems within the specific column's data, but most likely\nindicate corruption at the level of the overall COPY line format; ergo the\nline-level context display is sufficient. You could quibble with that\nconclusion of course, but if you agree with it, then the column_no\nparameter is useless here. I probably just failed to notice at the time\nthat the parameter was otherwise unused, else I would have removed it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Jun 2020 23:48:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cleanup - Removal of unused function parameter from\n CopyReadBinaryAttribute"
},
{
"msg_contents": "On Fri, Jun 19, 2020 at 9:18 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > On Thu, Jun 18, 2020 at 10:05 PM Fujii Masao\n> > <masao.fujii@oss.nttdata.com> wrote:\n> >> column_no was used for that purpose in the past, but commit 0e319c7ad7\n> >> changed that.\n>\n> > Yeah, but not sure why? By looking at the commit message and change\n> > it is difficult to say why it has been removed? Tom has made that\n> > change but I don't think he would remember it, in any case, adding him\n> > in the email to see if he remembers anything related to it.\n>\n> Hm, no, that commit is nearly old enough to vote :-(\n>\n> However, I dug around in the archives, and I found what seems to be\n> the relevant pghackers thread:\n>\n> https://www.postgresql.org/message-id/flat/28188.1064615075%40sss.pgh.pa.us#8e0c07452bb7e729829d456cfb0ec485\n>\n> Looking at that, I think I concluded that these error cases are not useful\n> indications of problems within the specific column's data, but most likely\n> indicate corruption at the level of the overall COPY line format; ergo the\n> line-level context display is sufficient. You could quibble with that\n> conclusion of course, but if you agree with it, then the column_no\n> parameter is useless here.\n>\n\nI don't see any problem with your conclusion and the fact that we\nhaven't come across any case which requires column_no in such messages\nfavors your conclusion.\n\n> I probably just failed to notice at the time\n> that the parameter was otherwise unused, else I would have removed it.\n>\n\nNo issues, I can take care of this (probably in HEAD only).\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 19 Jun 2020 09:36:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Cleanup - Removal of unused function parameter from\n CopyReadBinaryAttribute"
},
{
"msg_contents": "On Thu, Jun 18, 2020 at 10:05 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/06/18 23:09, Bharath Rupireddy wrote:\n> > On Thu, Jun 18, 2020 at 7:01 PM vignesh C <vignesh21@gmail.com> wrote:\n> >>\n> >> Hi,\n> >>\n> >> While checking copy from code I found that the function parameter\n> >> column_no is not used in CopyReadBinaryAttribute. I felt this could be\n> >> removed.\n> >> Attached patch contains the changes for the same.\n> >> Thoughts?\n> >>\n> >\n> > I don't see any problem in removing this extra parameter.\n> >\n> > However another thought, can it be used to report a bit meaningful\n> > error for field size < 0 check?\n>\n> column_no was used for that purpose in the past, but commit 0e319c7ad7\n> changed that. If we want to use column_no in the log message again,\n> it's better to check why commit 0e319c7ad7 got rid of column_no from\n> the message.\n\nI noticed that displaying of column information is present and it is\ndone in a different way. Basically cstate->cur_attname is set with the\ncolumn name before calling CopyReadBinaryAttribute function. If there\nis any error in CopyReadBinaryAttribute function,\nCopyFromErrorCallback will be called. CopyFromErrorCallback function\ntakes care of displaying the column name by using cstate->cur_attname.\nI tried simulating this and it displays the column name neatly in the\nerror message.:\npostgres=# copy t1 from '/home/db/copydata/t1_copy.bin' with (format 'binary');\nERROR: invalid field size\nCONTEXT: COPY t1, line 1, column c1\nI feel we can safely remove the parameter as in the patch.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 19 Jun 2020 10:46:47 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleanup - Removal of unused function parameter from\n CopyReadBinaryAttribute"
},
{
"msg_contents": "> > > I don't see any problem in removing this extra parameter.\n> > >\n> > > However another thought, can it be used to report a bit meaningful\n> > > error for field size < 0 check?\n> >\n> > column_no was used for that purpose in the past, but commit 0e319c7ad7\n> > changed that. If we want to use column_no in the log message again,\n> > it's better to check why commit 0e319c7ad7 got rid of column_no from\n> > the message.\n>\n> I noticed that displaying of column information is present and it is\n> done in a different way. Basically cstate->cur_attname is set with the\n> column name before calling CopyReadBinaryAttribute function. If there\n> is any error in CopyReadBinaryAttribute function,\n> CopyFromErrorCallback will be called. CopyFromErrorCallback function\n> takes care of displaying the column name by using cstate->cur_attname.\n> I tried simulating this and it displays the column name neatly in the\n> error message.:\n> postgres=# copy t1 from '/home/db/copydata/t1_copy.bin' with (format 'binary');\n> ERROR: invalid field size\n> CONTEXT: COPY t1, line 1, column c1\n> I feel we can safely remove the parameter as in the patch.\n>\n\nthanks for this information.\n\n+1 for this patch.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 19 Jun 2020 10:59:38 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Cleanup - Removal of unused function parameter from\n CopyReadBinaryAttribute"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nOn the back of this thread[1] over at pgsql-general, I've attached a patch that marks the functions in btree_gist as PARALLEL SAFE.\r\nThis is primarily to allow parallel plans to be considered when btree_gist's <-> operator is used in any context; for example in an expression\r\nthat will be evaluated at execution time, or in a functional column in btree or gist indexes.\r\n\r\nIn the latter example, despite the functions already being marked IMMUTABLE, attempts to retrieve precomputed values from a functional index during an index scan or index-only scan still require the function to be marked PARALLEL SAFE to prevent dropping down to a serial plan.\r\n\r\nIt requires btree_gist's version to be bumped to 1.6.\r\n\r\nIn line with this commit[2], and for the same reasons, all functions defined by btree_gist are being marked as safe:\r\n\r\n---\r\n\r\n\"... Note that some of the markings added by this commit don't have any\r\neffect; for example, gseg_picksplit() isn't likely to be mentioned\r\nexplicitly in a query and therefore it's parallel-safety marking will\r\nnever be consulted. 
But this commit just marks everything for\r\nconsistency: if it were somehow used in a query, that would be fine as\r\nfar as parallel query is concerned, since it does not consult any\r\nbackend-private state, attempt to write data, etc.\"\r\n\r\n---\r\n\r\nI haven't added any more tests, but neither did I find any added with the above commit.\r\n\r\n\"CREATE EXTENSION btree_gist\" runs successfully, as does \"make check-world\".\r\n\r\nThis is the first patch I've submitted, so if I've omitted something then please let me know.\r\nThanks for your time,\r\nSteven.\r\n\r\n\r\n[1] https://www.postgresql.org/message-id/DB7PR09MB2537E18FF90C1C1BBF49D628FD830%40DB7PR09MB2537.eurprd09.prod.outlook.com\r\n[2] https://github.com/postgres/postgres/commit/2910fc8239fa501b662c5459d7ba16a4bc35e7e8\r\n\r\n\r\n(Apologies if my company's footer appears here)\r\n\r\n** Cantab Capital Partners LLP is now named GAM Systematic LLP. Please note that our email addresses have changed from @cantabcapital.com to @gam.com.**\r\n\r\nThis email was sent by and on behalf of GAM Investments. GAM Investments is the corporate brand for GAM Holding AG and its direct and indirect subsidiaries. These companies may be referred to as ‘GAM’ or ‘GAM Investments’. In the United Kingdom, the business of GAM Investments is conducted by GAM (U.K.) Limited (No. 01664573) or one or more entities under the control of GAM (U.K.) Limited, including the following entities authorised and regulated by the Financial Conduct Authority: GAM International Management Limited (No. 01802911), GAM London Limited (No. 00874802), GAM Sterling Management Limited (No. 01750352), GAM Unit Trust Management Company Limited (No. 2873560) and GAM Systematic LLP (No. OC317557). GAM (U.K.) Limited and its regulated entities are registered in England and Wales. The registered office and principal place of business of GAM (U.K.) Limited and its regulated entities is at 8 Finsbury Circus, London, England, EC2M 7GB. 
The registered office of GAM Systematic LLP is at City House, Hills Road, Cambridge, CB2 1RE. This email, and any attachments, is confidential and may be privileged or otherwise protected from disclosure. It is intended solely for the stated addressee(s) and access to it by any other person is unauthorised. If you are not the intended recipient, you must not disclose, copy, circulate or in any other way use or rely on the information contained herein. If you have received this email in error, please inform us immediately and delete all copies of it. See - https://www.gam.com/en/legal/email-disclosures-eu/ for further information on confidentiality, the risks of non-secure electronic communication, and certain disclosures which we are required to make in accordance with applicable legislation and regulations. If you cannot access this link, please notify us by reply message and we will send the contents to you. GAM Investments will collect and use information about you in the course of your interactions with us. Full details about the data types we collect and what we use this for and your related rights is set out in our online privacy policy at https://www.gam.com/en/legal/privacy-policy. Please familiarise yourself with this policy and check it from time to time for updates as it supplements this notice.",
"msg_date": "Thu, 18 Jun 2020 14:22:10 +0000",
"msg_from": "\"Winfield, Steven\" <Steven.Winfield@gam.com>",
"msg_from_op": true,
"msg_subject": "Mark btree_gist functions as PARALLEL SAFE"
},
{
"msg_contents": "\"Winfield, Steven\" <Steven.Winfield@gam.com> writes:\n> On the back of this thread[1] over at pgsql-general, I've attached a patch that marks the functions in btree_gist as PARALLEL SAFE.\n\nCool, please add this patch to the commitfest queue to make sure we\ndon't lose track of it:\n\nhttps://commitfest.postgresql.org/28/\n\n(You'll need to have a community-website login if you don't already)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Jun 2020 11:28:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Mark btree_gist functions as PARALLEL SAFE"
},
{
"msg_contents": "Done - thanks again.\n\nSteven.\n________________________________\nFrom: Tom Lane <tgl@sss.pgh.pa.us>\nSent: 18 June 2020 16:28\nTo: Winfield, Steven <Steven.Winfield@gam.com>\nCc: pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Mark btree_gist functions as PARALLEL SAFE\n\n\"Winfield, Steven\" <Steven.Winfield@gam.com> writes:\n> On the back of this thread[1] over at pgsql-general, I've attached a patch that marks the functions in btree_gist as PARALLEL SAFE.\n\nCool, please add this patch to the commitfest queue to make sure we\ndon't lose track of it:\n\nhttps://urldefense.proofpoint.com/v2/url?u=https-3A__commitfest.postgresql.org_28_&d=DwIFAg&c=55O-mPK0zNHOMgGDdj4__Q&r=JVb896wEUUvWD7VB-jCoEHWXWQRzU6xqZ3aOcIepVzQ&m=oE70Ndze7YukhfkcIi70pjHsKB2zAgJuNkcyEXLkSso&s=G4QLL-xXQrC_txQXOnZGH8bD5kq6sOPh5BY-DyWA9fw&e=\n\n(You'll need to have a community-website login if you don't already)\n\n regards, tom lane\n\n** Cantab Capital Partners LLP is now named GAM Systematic LLP. Please note that our email addresses have changed from @cantabcapital.com to @gam.com.**\n\nThis email was sent by and on behalf of GAM Investments. GAM Investments is the corporate brand for GAM Holding AG and its direct and indirect subsidiaries. These companies may be referred to as 'GAM' or 'GAM Investments'. In the United Kingdom, the business of GAM Investments is conducted by GAM (U.K.) Limited (No. 01664573) or one or more entities under the control of GAM (U.K.) Limited, including the following entities authorised and regulated by the Financial Conduct Authority: GAM International Management Limited (No. 01802911), GAM London Limited (No. 00874802), GAM Sterling Management Limited (No. 01750352), GAM Unit Trust Management Company Limited (No. 2873560) and GAM Systematic LLP (No. OC317557). GAM (U.K.) Limited and its regulated entities are registered in England and Wales. The registered office and principal place of business of GAM (U.K.) 
Limited and its regulated entities is at 8 Finsbury Circus, London, England, EC2M 7GB. The registered office of GAM Systematic LLP is at City House, Hills Road, Cambridge, CB2 1RE. This email, and any attachments, is confidential and may be privileged or otherwise protected from disclosure. It is intended solely for the stated addressee(s) and access to it by any other person is unauthorised. If you are not the intended recipient, you must not disclose, copy, circulate or in any other way use or rely on the information contained herein. If you have received this email in error, please inform us immediately and delete all copies of it. See - https://www.gam.com/en/legal/email-disclosures-eu/ for further information on confidentiality, the risks of non-secure electronic communication, and certain disclosures which we are required to make in accordance with applicable legislation and regulations. If you cannot access this link, please notify us by reply message and we will send the contents to you. GAM Investments will collect and use information about you in the course of your interactions with us. Full details about the data types we collect and what we use this for and your related rights is set out in our online privacy policy at https://www.gam.com/en/legal/privacy-policy. 
Please familiarise yourself with this policy and check it from time to time for updates as it supplements this notice.",
"msg_date": "Thu, 18 Jun 2020 16:47:29 +0000",
"msg_from": "\"Winfield, Steven\" <Steven.Winfield@gam.com>",
"msg_from_op": true,
"msg_subject": "Re: Mark btree_gist functions as PARALLEL SAFE"
},
{
"msg_contents": "On Thu, Jun 18, 2020 at 7:48 PM Winfield, Steven\n<Steven.Winfield@gam.com> wrote:\n> Done - thanks again.\n\nThis patch looks good to me.\n\nI've rechecked it marks all the functions as parallel safe by\ninstalling an extension and querying the catalog. I've also rechecked\nthat there is nothing suspicious in these functions in terms of\nparallel safety. I did just minor adjustments in migration script\ncomments.\n\nI'm going to push this if no objections.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Wed, 15 Jul 2020 15:26:24 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Mark btree_gist functions as PARALLEL SAFE"
},
{
"msg_contents": "On Wed, Jul 15, 2020 at 03:26:24PM +0300, Alexander Korotkov wrote:\n> On Thu, Jun 18, 2020 at 7:48 PM Winfield, Steven\n> <Steven.Winfield@gam.com> wrote:\n> > Done - thanks again.\n> \n> This patch looks good to me.\n> \n> I'm going to push this if no objections.\n\nI marked as committed to make patch checker look healthier.\n\nThanks,\n-- \nJustin\n\n\n",
"msg_date": "Mon, 20 Jul 2020 07:18:51 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Mark btree_gist functions as PARALLEL SAFE"
},
{
"msg_contents": "On Mon, Jul 20, 2020 at 3:18 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Wed, Jul 15, 2020 at 03:26:24PM +0300, Alexander Korotkov wrote:\n> > On Thu, Jun 18, 2020 at 7:48 PM Winfield, Steven\n> > <Steven.Winfield@gam.com> wrote:\n> > > Done - thanks again.\n> >\n> > This patch looks good to me.\n> >\n> > I'm going to push this if no objections.\n>\n> I marked as committed to make patch checker look healthier.\n>\n\nThank you!\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Mon, 20 Jul 2020 15:38:32 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Mark btree_gist functions as PARALLEL SAFE"
}
] |
[
{
"msg_contents": "Hi Maxim,\n\nCoverity show that trap (1), still persists, even in HEAD.\nCID 1425439 (#1 of 1): Explicit null dereferenced (FORWARD_NULL)\n15. var_deref_model: Passing null pointer expr->expr_simple_state to\nExecEvalExpr, which dereferences it. [show details\n<https://scan6.coverity.com/eventId=30547522-25&modelId=30547522-1&fileInstanceId=101889282&filePath=%2Fdll%2Fpostgres%2Fsrc%2Finclude%2Fexecutor%2Fexecutor.h&fileStart=290&fileEnd=295>]\n\n\nIs really, it is very difficult to provide a minimum reproducible test case?\nI tried, without success.\n\nregards,\nRanier Vilela\n\n1.\nhttps://www.postgresql.org/message-id/20160330070414.8944.52106%40wrigleys.postgresql.org",
"msg_date": "Thu, 18 Jun 2020 17:00:05 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "re: BUG #14053: postgres crashes in plpgsql under load of concurrent\n transactions"
}
] |
[
{
"msg_contents": "Hi\n\nsome czech user reported a broken database when he used a repair point on\nMicrosoft Windows.\n\nThe reply from Microsoft was, so it is not a Microsoft issue, but Postgres\nissue, because Postgres doesn't handle VSS process correctly, and then\nrestore point can restore some parts of Postgres's data too.\n\nhttps://docs.microsoft.com/en-us/windows-server/storage/file-server/volume-shadow-copy-service\n\nRegards\n\nPavel",
"msg_date": "Fri, 19 Jun 2020 07:19:16 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "missing support for Microsoft VSS Writer"
},
{
"msg_contents": "On Fri, Jun 19, 2020 at 7:20 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> some czech user reported a broken database when he used a repair point on\n> Microsoft Windows.\n>\n> The reply from Microsoft was, so it is not a Microsoft issue, but Postgres\n> issue, because Postgres doesn't handle VSS process correctly, and then\n> restore point can restore some parts of Postgres's data too.\n>\n>\n>\n> https://docs.microsoft.com/en-us/windows-server/storage/file-server/volume-shadow-copy-service\n>\n\nThat is pretty much trying to do file system level backup [1], and the same\ncaveats apply. Most likely, anything but a repair point with a shutdown\nserver will result in a broken database when restored.\n\nIn order to get a consistent copy there has to be a specific Postgres\nWriter [2]. There are some samples on how to use the interface with the\nVolume Shadow Copy Service [3], but I do not think there is an interface to\na file-system consistent snapshot in any other system.\n\n[1] https://www.postgresql.org/docs/current/backup-file.html\n[2] https://docs.microsoft.com/en-us/windows/win32/vss/writers\n[3]\nhttps://github.com/microsoft/Windows-classic-samples/tree/master/Samples/VolumeShadowCopyServiceWriter\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Fri, 19 Jun 2020 14:12:32 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: missing support for Microsoft VSS Writer"
}
] |
[
{
"msg_contents": "When you switch a built REL_11_STABLE or earlier to REL_12_STABLE or \nlater, you get during make install (or, therefore, make check) an error\n\ncp ./*.h 'PREFIX/include/server'/\ncp: cannot stat './dynloader.h': No such file or directory\n\nThis is because version 11 and earlier created src/include/dynloader.h \nas a symlink during configure, but this was changed in version 12, and \nthe target that the symlink points to is no longer there in that branch.\n\nThis has been known for some time, and I figured the issue would go \naway, but people keep complaining to me, so maybe a simple fix could be \napplied.\n\nEven if it's quite late to fix this, it's perhaps worth establishing a \nprincipled solution, in case we ever change any of the other symlinks \ncurrently created by configure.\n\nIt is worth noting that half the problem is that the cp command uses \nwildcards, where in a puristic situation all the files would be listed \nexplicitly and any extra files left around would not pose problems. \nHowever, it seems generally bad to leave broken symlinks lying around, \nsince that can also trip up other tools.\n\nMy proposed fix is to apply this patch:\n\ndiff --git a/configure.in b/configure.in\nindex 7d63eb2fa3..84221690e0 100644\n--- a/configure.in\n+++ b/configure.in\n@@ -2474,7 +2474,10 @@ AC_CONFIG_LINKS([\n src/backend/port/pg_shmem.c:${SHMEM_IMPLEMENTATION}\n src/include/pg_config_os.h:src/include/port/${template}.h\n src/Makefile.port:src/makefiles/Makefile.${template}\n-])\n+], [],\n+[# Remove links created by old versions of configure, so that there\n+# are no broken symlinks in the tree\n+rm -f src/include/dynloader.h])\n\n if test \"$PORTNAME\" = \"win32\"; then\n AC_CONFIG_COMMANDS([check_win32_symlinks],[\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 19 Jun 2020 10:02:04 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "doing something about the broken dynloader.h symlink"
},
{
"msg_contents": "On Fri, Jun 19, 2020 at 8:02 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> +[# Remove links created by old versions of configure, so that there\n> +# are no broken symlinks in the tree\n> +rm -f src/include/dynloader.h])\n\n+1\n\n\n",
"msg_date": "Fri, 19 Jun 2020 22:08:05 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doing something about the broken dynloader.h symlink"
},
{
"msg_contents": "On Fri, Jun 19, 2020 at 10:08:05PM +1200, Thomas Munro wrote:\n> On Fri, Jun 19, 2020 at 8:02 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>> +[# Remove links created by old versions of configure, so that there\n>> +# are no broken symlinks in the tree\n>> +rm -f src/include/dynloader.h])\n> \n> +1\n\nNot sure about your suggested patch, but +1 for doing something.\n--\nMichael",
"msg_date": "Fri, 19 Jun 2020 19:17:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: doing something about the broken dynloader.h symlink"
},
{
"msg_contents": "On Fri, Jun 19, 2020 at 12:08 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Fri, Jun 19, 2020 at 8:02 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> > +[# Remove links created by old versions of configure, so that there\n> > +# are no broken symlinks in the tree\n> > +rm -f src/include/dynloader.h])\n>\n> +1\n\n+1\n\n\n",
"msg_date": "Fri, 19 Jun 2020 13:38:24 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doing something about the broken dynloader.h symlink"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> When you switch a built REL_11_STABLE or earlier to REL_12_STABLE or \n> later, you get during make install (or, therefore, make check) an error\n\nWhile I don't necessarily object to the hack you propose here, it seems\nto me that really we need to caution people more strongly about not just\ncavalierly checking out a different branch in the same build directory\nwithout fully cleaning up built files (that is, \"git clean -dfx\" or the\nlike). There are other gotchas that that creates, and I don't want to\nestablish a precedent that it's our job to work around them.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 19 Jun 2020 10:00:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: doing something about the broken dynloader.h symlink"
},
{
"msg_contents": "On Fri, Jun 19, 2020 at 11:38 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> On Fri, Jun 19, 2020 at 12:08 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Fri, Jun 19, 2020 at 8:02 PM Peter Eisentraut\n> > <peter.eisentraut@2ndquadrant.com> wrote:\n> > > +[# Remove links created by old versions of configure, so that there\n> > > +# are no broken symlinks in the tree\n> > > +rm -f src/include/dynloader.h])\n> >\n> > +1\n>\n> +1\n\n+1\n\n\n",
"msg_date": "Mon, 15 Feb 2021 14:19:45 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doing something about the broken dynloader.h symlink"
}
] |
[
{
"msg_contents": "I want to maintain an internal table which the primary key is sql_text and\nplanstmt::text, it is efficient since it both may be very long. So a\ngeneral\nidea is to use sql_hash_value and plan_hash_value. Then we have to\nhandle the hash collision case. However I checked the codes both in\nsr_plans[1]\nand pg_stat_statements[2], both of them didn't handle such cases, IIUC. so\nhow can I understand this situation?\n\n\n[1]\nhttps://github.com/postgrespro/sr_plan/blob/41d96bf136ec072dac77dddf8d9765bba39190ff/sr_plan.c#L383\n[2]\nhttps://github.com/postgres/postgres/blob/master/contrib/pg_stat_statements/pg_stat_statements.c#L154\n\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Fri, 19 Jun 2020 16:24:01 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "hash as an search key and hash collision"
},
{
"msg_contents": "On Fri, Jun 19, 2020 at 04:24:01PM +0800, Andy Fan wrote:\n>I want to maintain an internal table which the primary key is sql_text and\n>planstmt::text, it is efficient since it both may be very long. So a\n>general\n>idea is to use sql_hash_value and plan_hash_value. Then we have to\n>handle the hash collision case. However I checked the codes both in\n>sr_plans[1]\n>and pg_stat_statements[2], both of them didn't handle such cases, IIUC. so\n>how can I understand this situation?\n>\n\nIIRC pg_stat_statements simply accepts the hash collision risk. This is\nwhat the docs say:\n\n In some cases, queries with visibly different texts might get merged\n into a single pg_stat_statements entry. Normally this will happen\n only for semantically equivalent queries, but there is a small\n chance of hash collisions causing unrelated queries to be merged\n into one entry. (This cannot happen for queries belonging to\n different users or databases, however.)\n\nThe consequences of a hash collision are relatively harmless, enough to\nmake it not worth the extra checks (e.g. because the SQL text may not be\navailable in memory and would need to be read from the file).\n\nI suppose sr_plan does the same thing, but I haven't checked.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 19 Jun 2020 18:34:16 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: hash as an search key and hash collision"
},
{
"msg_contents": "On Sat, Jun 20, 2020 at 12:34 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On Fri, Jun 19, 2020 at 04:24:01PM +0800, Andy Fan wrote:\n> >I want to maintain an internal table which the primary key is sql_text and\n> >planstmt::text, it is efficient since it both may be very long. So a\n> >general\n> >idea is to use sql_hash_value and plan_hash_value. Then we have to\n> >handle the hash collision case. However I checked the codes both in\n> >sr_plans[1]\n> >and pg_stat_statements[2], both of them didn't handle such cases, IIUC.\n> so\n> >how can I understand this situation?\n> >\n>\n> IIRC pg_stat_statements simply accepts the hash collision risk. This is\n> what the docs say:\n>\n> In some cases, queries with visibly different texts might get merged\n> into a single pg_stat_statements entry. Normally this will happen\n> only for semantically equivalent queries, but there is a small\n> chance of hash collisions causing unrelated queries to be merged\n> into one entry. (This cannot happen for queries belonging to\n> different users or databases, however.)\n>\n> The consequences of a hash collision are relatively harmless, enough to\n> make it not worth the extra checks (e.g. because the SQL text may not be\n> available in memory and would need to be read from the file).\n>\n\nI see. Thank you for this information, this does make sense.\n\nI suppose sr_plan does the same thing, but I haven't checked.\n>\n\nsr_plans is used to map a sql hash value to a PlannedStmts, if hash\ncollisions\nhappen, it may execute a query B while the user wants to execute Query A.\nthis should be more sensitive than pg_stat_statements which doesn't require\nexact data. 
I added Ildus who is the author of sr_plan to the cc list\nin case he wants to take a look.\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Sat, 20 Jun 2020 07:33:39 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: hash as an search key and hash collision"
}
] |
[
{
"msg_contents": "At \n<https://wiki.postgresql.org/wiki/PostgreSQL_vs_SQL_Standard#Obsolete_syntax_for_substring.28.29> \nit is described that the substring pattern matching syntax in PostgreSQL \ndoes not conform to the current standard. PostgreSQL implements\n\n SUBSTRING(text FROM pattern FOR escapechar)\n\nwhereas the current standard says\n\n SUBSTRING(text SIMILAR pattern ESCAPE escapechar)\n\nThe former was in SQL99, but the latter has been there since SQL:2003.\n\nIt's pretty easy to implement the second form also, so here is a patch \nthat does that.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 19 Jun 2020 11:42:14 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "update substring pattern matching syntax"
},
{
"msg_contents": "pá 19. 6. 2020 v 11:42 odesílatel Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> napsal:\n\n> At\n> <\n> https://wiki.postgresql.org/wiki/PostgreSQL_vs_SQL_Standard#Obsolete_syntax_for_substring.28.29>\n>\n> it is described that the substring pattern matching syntax in PostgreSQL\n> does not conform to the current standard. PostgreSQL implements\n>\n> SUBSTRING(text FROM pattern FOR escapechar)\n>\n> whereas the current standard says\n>\n> SUBSTRING(text SIMILAR pattern ESCAPE escapechar)\n>\n> The former was in SQL99, but the latter has been there since SQL:2003.\n>\n> It's pretty easy to implement the second form also, so here is a patch\n> that does that.\n>\n\n+1\n\nPavel\n\n\n> --\n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>",
"msg_date": "Fri, 19 Jun 2020 12:25:50 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: update substring pattern matching syntax"
},
{
"msg_contents": "On 6/19/20 11:42 AM, Peter Eisentraut wrote:\n> At\n> <https://wiki.postgresql.org/wiki/PostgreSQL_vs_SQL_Standard#Obsolete_syntax_for_substring.28.29>\n> it is described that the substring pattern matching syntax in PostgreSQL\n> does not conform to the current standard. PostgreSQL implements\n> \n> SUBSTRING(text FROM pattern FOR escapechar)\n> \n> whereas the current standard says\n> \n> SUBSTRING(text SIMILAR pattern ESCAPE escapechar)\n> \n> The former was in SQL99, but the latter has been there since SQL:2003.\n> \n> It's pretty easy to implement the second form also, so here is a patch\n> that does that.\n\n\nOh good, this was on my list (I added that item to the wiki).\n\nThe patches look straightforward to me. The grammar cleanup patch makes\nthings easier to read indeed. At first I didn't see a test left over\nfor the old syntax, but it's there so this is all LGTM.\n\nThanks for doing this!\n-- \nVik Fearing\n\n\n",
"msg_date": "Sat, 20 Jun 2020 00:03:31 +0200",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: update substring pattern matching syntax"
},
{
"msg_contents": "\nHello Peter,\n\n> whereas the current standard says\n>\n> SUBSTRING(text SIMILAR pattern ESCAPE escapechar)\n>\n> The former was in SQL99, but the latter has been there since SQL:2003.\n>\n> It's pretty easy to implement the second form also, so here is a patch that \n> does that.\n\nPatches apply cleanly, compile and \"make check\" is ok. doc gen is ok as \nwell.\n\nGrammar cleanup is a definite improvement as it makes the grammar closer \nto the actual syntax.\n\nI cannot say I'm a fan of this kind of keywords added for some arguments. \nI guess that it allows distinguishing between variants. I do not have the \nstandard at hand: I wanted to check whether these keywords could be \nreordered, i.e. whether SUBSTRING(text ESCAPE ec SIMILAR part) was legal. \nI guess not.\n\nMaybe the doc could advertise more systematically whether a features \nconforms fully or partially to some SQL standards, or is pg specific. The \nadded documentation refers both to SQL:1999 and SQL99. I'd suggest to \nchose one, possibly the former, and use it everywhere consistently.\n\nIt seems that two instances where not updated to the new syntax, see in \n./src/backend/catalog/information_schema.sql and \n./contrib/citext/sql/citext.sql.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sat, 20 Jun 2020 09:08:06 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: update substring pattern matching syntax"
},
{
"msg_contents": "On 2020-06-20 09:08, Fabien COELHO wrote:\n> I cannot say I'm a fan of this kind of keywords added for some arguments.\n> I guess that it allows distinguishing between variants. I do not have the\n> standard at hand: I wanted to check whether these keywords could be\n> reordered, i.e. whether SUBSTRING(text ESCAPE ec SIMILAR part) was legal.\n> I guess not.\n\nIt is not.\n\n> Maybe the doc could advertise more systematically whether a features\n> conforms fully or partially to some SQL standards, or is pg specific.\n\nI think that would be useful, but it's probably a broader topic than \njust for this specific function.\n\n> The\n> added documentation refers both to SQL:1999 and SQL99. I'd suggest to\n> chose one, possibly the former, and use it everywhere consistently.\n\nfixed\n\n> It seems that two instances where not updated to the new syntax, see in\n> ./src/backend/catalog/information_schema.sql and\n> ./contrib/citext/sql/citext.sql.\n\ndone\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sat, 27 Jun 2020 11:07:34 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: update substring pattern matching syntax"
},
{
"msg_contents": "\nHallo Peter,\n\nv2 patches apply cleanly, compile, global check ok, citext check ok, doc \ngen ok. No further comments.\n\nAs I did not find an entry in the CF, so I did nothing about tagging it \n\"ready\".\n\n-- \nFabien.\n\n\n",
"msg_date": "Sun, 28 Jun 2020 08:13:47 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: update substring pattern matching syntax"
},
{
"msg_contents": "On 2020-06-28 08:13, Fabien COELHO wrote:\n> v2 patches apply cleanly, compile, global check ok, citext check ok, doc\n> gen ok. No further comments.\n\ncommitted, thanks\n\n> As I did not find an entry in the CF, so I did nothing about tagging it\n> \"ready\".\n\nRight, I had not registered it yet.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 29 Jun 2020 11:58:11 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: update substring pattern matching syntax"
}
] |
[
{
"msg_contents": "Hello,\n\n\nThe mails that I get today from pgsql-committers contain links (as \nusual) to git.postgresql.org\nbut these links don't seem to give valid pages: I get what looks like a \ngitweb page but with '404 - Unknown commit object '\n\nexample:\nhttps://git.postgresql.org/pg/commitdiff/15cb2bd27009f73a84a35c2ba60fdd105b4bf263\n\n\nAnd I can git-pull without error but nothing more recent than this:\n\n-----------------------\ncommit ae3259c55067c926d25c745d70265fca15c2d26b\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\nDate: Fri Jun 19 16:46:07 2020 -0400\n\n Ensure write failure reports no-disk-space\n\n A few places calling fwrite and gzwrite were not setting errno to \nENOSPC\n when reporting errors, as is customary; this led to some failures \nbeing\n-----------------------\n\nI don't know exactly where things are going wrong - could even be local \nhere but I don't think so.. do others see the same thing?\n\n\nThanks,\n\n\nErik Rijkers\n\n\n\n\n\n",
"msg_date": "Sat, 20 Jun 2020 14:32:34 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "git.postgresql.org ok?"
},
{
"msg_contents": "On Sat, Jun 20, 2020 at 6:02 PM Erik Rijkers <er@xs4all.nl> wrote:\n>\n>\n> I don't know exactly where things are going wrong - could even be local\n> here but I don't think so.. do others see the same thing?\n>\n\nYes, I am also facing the same problem.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 20 Jun 2020 19:42:33 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: git.postgresql.org ok?"
},
{
"msg_contents": "On Sat, Jun 20, 2020 at 3:32 PM Erik Rijkers <er@xs4all.nl> wrote:\n> The mails that I get today from pgsql-committers contain links (as\n> usual) to git.postgresql.org\n> but these links don't seem to give valid pages: I get what looks like a\n> gitweb page but with '404 - Unknown commit object '\n>\n> example:\n> https://git.postgresql.org/pg/commitdiff/15cb2bd27009f73a84a35c2ba60fdd105b4bf263\n\nI've also discovered similar issues. It seems that new commit appears\nat https://git.postgresql.org, but with delay. Commit notifications to\npgsql-committers also seem to work with delays.\n\n> And I can git-pull without error but nothing more recent than this:\n\nI've discovered timeouts while accessing gitmaster.postgresql.org\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sat, 20 Jun 2020 17:46:12 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: git.postgresql.org ok?"
},
{
"msg_contents": "On 6/20/20 4:46 PM, Alexander Korotkov wrote:\n> On Sat, Jun 20, 2020 at 3:32 PM Erik Rijkers <er@xs4all.nl> wrote:\n>> The mails that I get today from pgsql-committers contain links (as\n>> usual) to git.postgresql.org\n>> but these links don't seem to give valid pages: I get what looks like a\n>> gitweb page but with '404 - Unknown commit object '\n>>\n>> example:\n>> https://git.postgresql.org/pg/commitdiff/15cb2bd27009f73a84a35c2ba60fdd105b4bf263\n> \n> I've also discovered similar issues. It seems that new commit appears\n> at https://git.postgresql.org, but with delay. Commit notifications to\n> pgsql-committers also seem to work with delays.\n> \n>> And I can git-pull without error but nothing more recent than this:\n> \n> I've discovered timeouts while accessing gitmaster.postgresql.org\n\nthe root issue should be fixed as of a few minutes ago but it might take\na bit until everything is synced up again.\n\n\nsorry for the inconvinience :/\n\n\nStefan\n\n\n",
"msg_date": "Sat, 20 Jun 2020 17:08:21 +0200",
"msg_from": "Stefan Kaltenbrunner <stefan@kaltenbrunner.cc>",
"msg_from_op": false,
"msg_subject": "Re: git.postgresql.org ok?"
},
{
"msg_contents": "On 2020-06-20 17:08, Stefan Kaltenbrunner wrote:\n> the root issue should be fixed as of a few minutes ago but it might \n> take\n> a bit until everything is synced up again.\n> \n> \n\nThanks!\n\n\n\n\n",
"msg_date": "Sat, 20 Jun 2020 17:13:49 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "Re: git.postgresql.org ok?"
},
{
"msg_contents": "On Sat, Jun 20, 2020 at 6:08 PM Stefan Kaltenbrunner\n<stefan@kaltenbrunner.cc> wrote:\n> On 6/20/20 4:46 PM, Alexander Korotkov wrote:\n> > On Sat, Jun 20, 2020 at 3:32 PM Erik Rijkers <er@xs4all.nl> wrote:\n> >> The mails that I get today from pgsql-committers contain links (as\n> >> usual) to git.postgresql.org\n> >> but these links don't seem to give valid pages: I get what looks like a\n> >> gitweb page but with '404 - Unknown commit object '\n> >>\n> >> example:\n> >> https://git.postgresql.org/pg/commitdiff/15cb2bd27009f73a84a35c2ba60fdd105b4bf263\n> >\n> > I've also discovered similar issues. It seems that new commit appears\n> > at https://git.postgresql.org, but with delay. Commit notifications to\n> > pgsql-committers also seem to work with delays.\n> >\n> >> And I can git-pull without error but nothing more recent than this:\n> >\n> > I've discovered timeouts while accessing gitmaster.postgresql.org\n>\n> the root issue should be fixed as of a few minutes ago but it might take\n> a bit until everything is synced up again.\n>\n>\n> sorry for the inconvinience :/\n\nNo problem. Thank you!\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sat, 20 Jun 2020 18:15:46 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: git.postgresql.org ok?"
}
] |
[
{
"msg_contents": "Hi Mark,\nplease, can you take a look?\n\nThis possible bug was appeared before, see at:\n1. https://bugzilla.redhat.com/show_bug.cgi?id=879803\n\nThe trap still persist, in HEAD see:\n\nsrc/interfaces/libpq/fe-exec.c (line 563)\n/* If there's enough space in the current block, no problem. */\nif (nBytes <= (size_t) res->spaceLeft)\n{\n space = res->curBlock->space + res->curOffset;\n res->curOffset += nBytes;\n res->spaceLeft -= nBytes;\n\n return space;\n}\n\nThe res->curBlock pointer possibly, can be NULL here (line 563).\n\nSee at:\nsrc/interfaces/libpq/fe-exec.c (line 585)\nif (res->curBlock)\n\nThe path is res->curBlock be NULL and res->spaceLeft > nBytes.\n\nIf res->curBlock it not can be NULL, inside pqResultAlloc function, why is\nverified against NULL at line 585?\n\nregards,\nRanier Vilela\n\nHi Mark,please, can you take a look?This possible bug was appeared before, see at:1. https://bugzilla.redhat.com/show_bug.cgi?id=879803The trap still persist, in HEAD see:src/interfaces/libpq/fe-exec.c (line 563)\t/* If there's enough space in the current block, no problem. */\tif (nBytes <= (size_t) res->spaceLeft)\t{\t space = res->curBlock->space + res->curOffset;\t res->curOffset += nBytes;\t res->spaceLeft -= nBytes; \t\treturn space;\t}The \nres->curBlock pointer possibly, can be NULL here (line 563).See at:\nsrc/interfaces/libpq/fe-exec.c (line 585)\n\n\t\tif (res->curBlock)The path is res->curBlock be NULL and res->spaceLeft > nBytes.If res->curBlock it not can be NULL, inside pqResultAlloc function, why is verified against NULL at line 585?regards,Ranier Vilela",
"msg_date": "Sat, 20 Jun 2020 11:07:49 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Possible NULL pointer deferenced (src/interfaces/libpq/fe-exec.c\n (line 563)"
},
{
"msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> The res->curBlock pointer possibly, can be NULL here (line 563).\n\nNo, it can't.\n\nTo get to that line, nBytes has to be > 0, which means res->spaceLeft\nhas to be > 0, which cannot happen while res->curBlock is NULL.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 21 Jun 2020 01:16:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Possible NULL pointer deferenced (src/interfaces/libpq/fe-exec.c\n (line 563)"
},
{
"msg_contents": "Em dom., 21 de jun. de 2020 às 02:16, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > The res->curBlock pointer possibly, can be NULL here (line 563).\n>\n> No, it can't.\n>\n> To get to that line, nBytes has to be > 0, which means res->spaceLeft\n> has to be > 0, which cannot happen while res->curBlock is NULL.\n>\nHi Tom, thanks for answer.\n\nregards,\nRanier Vilela\n\nEm dom., 21 de jun. de 2020 às 02:16, Tom Lane <tgl@sss.pgh.pa.us> escreveu:Ranier Vilela <ranier.vf@gmail.com> writes:\n> The res->curBlock pointer possibly, can be NULL here (line 563).\n\nNo, it can't.\n\nTo get to that line, nBytes has to be > 0, which means res->spaceLeft\nhas to be > 0, which cannot happen while res->curBlock is NULL.Hi Tom, thanks for answer.regards,Ranier Vilela",
"msg_date": "Sun, 21 Jun 2020 10:54:09 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Possible NULL pointer deferenced (src/interfaces/libpq/fe-exec.c\n (line 563)"
}
] |
[
{
"msg_contents": "I suggest to rename enable_incrementalsort to enable_incremental_sort. \nThis is obviously more readable and also how we have named recently \nadded multiword planner parameters.\n\nSee attached patch.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sun, 21 Jun 2020 08:26:34 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "suggest to rename enable_incrementalsort"
},
{
"msg_contents": "On Sun, Jun 21, 2020 at 8:26 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> I suggest to rename enable_incrementalsort to enable_incremental_sort.\n> This is obviously more readable and also how we have named recently\n> added multiword planner parameters.\n>\n> See attached patch.\n\n+1, this is a way better name (and patch LGTM on REL_13_STABLE).\n\n\n",
"msg_date": "Sun, 21 Jun 2020 09:05:32 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: suggest to rename enable_incrementalsort"
},
{
"msg_contents": "On Sun, Jun 21, 2020 at 09:05:32AM +0200, Julien Rouhaud wrote:\n>On Sun, Jun 21, 2020 at 8:26 AM Peter Eisentraut\n><peter.eisentraut@2ndquadrant.com> wrote:\n>>\n>> I suggest to rename enable_incrementalsort to enable_incremental_sort.\n>> This is obviously more readable and also how we have named recently\n>> added multiword planner parameters.\n>>\n>> See attached patch.\n>\n>+1, this is a way better name (and patch LGTM on REL_13_STABLE).\n>\n\nThe reason why I kept the single-word variant is consistency with other\nGUCs that affect planning, like enable_indexscan, enable_hashjoin and\nmany others.\n\nThat being said, I'm not particularly attached this choice, so if you\nthink this is better I'm OK with it.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 21 Jun 2020 13:21:59 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: suggest to rename enable_incrementalsort"
},
{
"msg_contents": "On Sun, 21 Jun 2020 at 23:22, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Sun, Jun 21, 2020 at 09:05:32AM +0200, Julien Rouhaud wrote:\n> >On Sun, Jun 21, 2020 at 8:26 AM Peter Eisentraut\n> ><peter.eisentraut@2ndquadrant.com> wrote:\n> >>\n> >> I suggest to rename enable_incrementalsort to enable_incremental_sort.\n> >> This is obviously more readable and also how we have named recently\n> >> added multiword planner parameters.\n> >>\n> >> See attached patch.\n> >\n> >+1, this is a way better name (and patch LGTM on REL_13_STABLE).\n> >\n>\n> The reason why I kept the single-word variant is consistency with other\n> GUCs that affect planning, like enable_indexscan, enable_hashjoin and\n> many others.\n\nLooking at the other enable_* GUCs, for all the ones that aim to\ndisable a certain executor node type, with the exception of\nenable_hashagg and enable_bitmapscan, they're all pretty consistent in\nnaming the GUC after the executor node's .c file:\n\nenable_bitmapscan nodeBitmapHeapscan.c\nenable_gathermerge nodeGatherMerge.c\nenable_hashagg nodeAgg.c\nenable_hashjoin nodeHashjoin.c\nenable_incrementalsort nodeIncrementalSort.c\nenable_indexonlyscan nodeIndexonlyscan.c\nenable_indexscan nodeIndexscan.c\nenable_material nodeMaterial.c\nenable_mergejoin nodeMergejoin.c\nenable_nestloop nodeNestloop.c\nenable_parallel_append nodeAppend.c\nenable_parallel_hash nodeHash.c\nenable_partition_pruning\nenable_partitionwise_aggregate\nenable_partitionwise_join\nenable_seqscan nodeSeqscan.c\nenable_sort nodeSort.c\nenable_tidscan nodeTidscan.c\n\nenable_partition_pruning, enable_partitionwise_aggregate,\nenable_partitionwise_join are the odd ones out here as they're not\nreally related to a specific node type.\n\nGoing by that, it does seem the current name for\nenable_incrementalsort is consistent with the majority. Naming it\nenable_incremental_sort looks like it would be more suited if the\nfeature had been added by overloading nodeSort.c. 
In that regard, it\nwould be similar to enable_parallel_append and enable_parallel_hash,\nwhere the middle word becomes a modifier.\n\nDavid\n\n\n",
"msg_date": "Mon, 22 Jun 2020 11:18:24 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: suggest to rename enable_incrementalsort"
},
{
"msg_contents": "On Mon, Jun 22, 2020 at 4:48 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Sun, 21 Jun 2020 at 23:22, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> >\n> > On Sun, Jun 21, 2020 at 09:05:32AM +0200, Julien Rouhaud wrote:\n> > >On Sun, Jun 21, 2020 at 8:26 AM Peter Eisentraut\n> > ><peter.eisentraut@2ndquadrant.com> wrote:\n> > >>\n> > >> I suggest to rename enable_incrementalsort to enable_incremental_sort.\n> > >> This is obviously more readable and also how we have named recently\n> > >> added multiword planner parameters.\n> > >>\n> > >> See attached patch.\n> > >\n> > >+1, this is a way better name (and patch LGTM on REL_13_STABLE).\n> > >\n> >\n> > The reason why I kept the single-word variant is consistency with other\n> > GUCs that affect planning, like enable_indexscan, enable_hashjoin and\n> > many others.\n>\n> Looking at the other enable_* GUCs, for all the ones that aim to\n> disable a certain executor node type, with the exception of\n> enable_hashagg and enable_bitmapscan, they're all pretty consistent in\n> naming the GUC after the executor node's .c file:\n>\n> enable_bitmapscan nodeBitmapHeapscan.c\n> enable_gathermerge nodeGatherMerge.c\n> enable_hashagg nodeAgg.c\n> enable_hashjoin nodeHashjoin.c\n> enable_incrementalsort nodeIncrementalSort.c\n> enable_indexonlyscan nodeIndexonlyscan.c\n> enable_indexscan nodeIndexscan.c\n> enable_material nodeMaterial.c\n> enable_mergejoin nodeMergejoin.c\n> enable_nestloop nodeNestloop.c\n> enable_parallel_append nodeAppend.c\n> enable_parallel_hash nodeHash.c\n> enable_partition_pruning\n> enable_partitionwise_aggregate\n> enable_partitionwise_join\n> enable_seqscan nodeSeqscan.c\n> enable_sort nodeSort.c\n> enable_tidscan nodeTidscan.c\n>\n> enable_partition_pruning, enable_partitionwise_aggregate,\n> enable_partitionwise_join are the odd ones out here as they're not\n> really related to a specific node type.\n\nThanks for the list. 
To me it's more of a question about readability\nthan consistency. enable_mergejoin, enable_hashjoin for example are\nreadable even without separating words merge_join or hash_join (many\ntimes I have typed enable_hash_join and cursed :); but that was before\nautocomplete was available). But enable_partitionwiseaggregate does\nnot look much different from enable_abracadabra :). Looking from that\nangle, enable_incremental_sort is better than enable_incrementalsort.\nWe could have named enable_indexonlyscan as enable_index_only_scan for\nbetter readability.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 22 Jun 2020 17:25:42 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: suggest to rename enable_incrementalsort"
},
{
"msg_contents": "On Sun, Jun 21, 2020 at 7:22 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> The reason why I kept the single-word variant is consistency with other\n> GUCs that affect planning, like enable_indexscan, enable_hashjoin and\n> many others.\n\nRight, so that makes sense, but from a larger point of view, how much\nsense does it actually make? I mean, I get the argument from tradition\nand from internal naming consistency, but from a user perspective, why\ndoes it makes sense for there to be underscores between some of the\nwords and not others? I think it just feels random, like someone is\ncharging us $1 per underscore so we're economizing.\n\nSo I'm +1 for changing this, and I'd definitely be +1 for renaming the\nothers if they weren't released already, and at least +0.5 for it\nanyhow. It's bad enough that our source code has names_like_this and\nNamesLikeThis and namesLikeThis; when we also start adding\nnames_likethis and NamesLike_this and maybe NaMeS___LiKeTh_is, I kind\nof lose my mind. And avoiding that sort of thing in user-facing stuff\nseems even more important.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 22 Jun 2020 10:16:54 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: suggest to rename enable_incrementalsort"
},
{
"msg_contents": "On Mon, Jun 22, 2020 at 10:16:54AM -0400, Robert Haas wrote:\n>On Sun, Jun 21, 2020 at 7:22 AM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>> The reason why I kept the single-word variant is consistency with other\n>> GUCs that affect planning, like enable_indexscan, enable_hashjoin and\n>> many others.\n>\n>Right, so that makes sense, but from a larger point of view, how much\n>sense does it actually make? I mean, I get the argument from tradition\n>and from internal naming consistency, but from a user perspective, why\n>does it makes sense for there to be underscores between some of the\n>words and not others? I think it just feels random, like someone is\n>charging us $1 per underscore so we're economizing.\n>\n\nSure. I'm not particularly attached to the current GUC, I've only tried\nto explain that the naming was not entirely random. I agree having an\nextra _ in the name would make it more readable.\n\n\n>So I'm +1 for changing this, and I'd definitely be +1 for renaming the\n>others if they weren't released already, and at least +0.5 for it\n>anyhow. It's bad enough that our source code has names_like_this and\n>NamesLikeThis and namesLikeThis; when we also start adding\n>names_likethis and NamesLike_this and maybe NaMeS___LiKeTh_is, I kind\n>of lose my mind. And avoiding that sort of thing in user-facing stuff\n>seems even more important.\n>\n\nOK, challenge accepted. $100 to the first person who commits a patch\nwith a variable NaMeS___LiKeTh_is.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 22 Jun 2020 16:31:14 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: suggest to rename enable_incrementalsort"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Sun, Jun 21, 2020 at 7:22 AM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n>> The reason why I kept the single-word variant is consistency with other\n>> GUCs that affect planning, like enable_indexscan, enable_hashjoin and\n>> many others.\n\n> Right, so that makes sense, but from a larger point of view, how much\n> sense does it actually make?\n\nMaybe I'm just used to the names, but I find that things like\n\"enable_seqscan\" and \"enable_nestloop\" are pretty readable.\nOnce they get longer, though, not so much. So I agree with\nrenaming enable_incrementalsort.\n\n> So I'm +1 for changing this, and I'd definitely be +1 for renaming the\n> others if they weren't released already, and at least +0.5 for it\n> anyhow.\n\nNah. Those names are way too well entrenched. Besides which, if\nwe open them up for reconsideration, there's going to be a lot of\nbikeshedding done. Should \"enable_seqscan\" become \"enable_seq_scan\",\nor \"enable_sequential_scan\", or maybe \"enable_scan_sequential\"?\nWhy doesn't \"enable_nestloop\" contain the word \"join\"? Etc etc.\n\n(I do have to wonder if maybe this one should be enable_sort_incremental.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Jun 2020 10:41:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: suggest to rename enable_incrementalsort"
},
{
"msg_contents": "On Mon, Jun 22, 2020 at 10:31 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> OK, challenge accepted. $100 to the first person who commits a patch\n> with a variable NaMeS___LiKeTh_is.\n\n:-)\n\nWell, that was hyperbole, but people have proposed some pretty wacky\nschemes, and a few of those have ended up in the tree. For example we\nhave AtEOXact_PgStat and its close friend AtEOXact_on_commit_actions,\nfor instance, or out_gistxlogDelete, or\nIncrementVarSublevelsUp_rtable, or convert_EXISTS_sublink_to_join. I\nconfess haven't managed to find any plausible examples of underscores\nin the middle of a word yet, and we only have a handful of examples of\ndouble-underscore and none with triple-underscore, but we've got\nnearly every combination of lower-case words, upper-case words,\ninitial-capital words, underscores separating words or not, and words\nabbreviated or not, and it's not hard to find cases where several\ndifferent styles are used in the same identifier. This isn't the end\nof the world or anything, but I think we would be better off if we\ntried to do less of it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 22 Jun 2020 11:13:20 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: suggest to rename enable_incrementalsort"
},
{
"msg_contents": "On Mon, Jun 22, 2020 at 10:41:17AM -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Sun, Jun 21, 2020 at 7:22 AM Tomas Vondra\n> > <tomas.vondra@2ndquadrant.com> wrote:\n> >> The reason why I kept the single-word variant is consistency with other\n> >> GUCs that affect planning, like enable_indexscan, enable_hashjoin and\n> >> many others.\n> \n> > Right, so that makes sense, but from a larger point of view, how much\n> > sense does it actually make?\n> \n> Maybe I'm just used to the names, but I find that things like\n> \"enable_seqscan\" and \"enable_nestloop\" are pretty readable.\n> Once they get longer, though, not so much. So I agree with\n> renaming enable_incrementalsort.\n\nI think the big problem is that, without the extra underscore, it reads\nas increment-alsort. ;-)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Mon, 22 Jun 2020 11:22:19 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: suggest to rename enable_incrementalsort"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Mon, Jun 22, 2020 at 10:41:17AM -0400, Tom Lane wrote:\n>> Maybe I'm just used to the names, but I find that things like\n>> \"enable_seqscan\" and \"enable_nestloop\" are pretty readable.\n>> Once they get longer, though, not so much. So I agree with\n>> renaming enable_incrementalsort.\n\n> I think the big problem is that, without the extra underscore, it reads\n> as increment-alsort. ;-)\n\nYeah, the longer the name gets, the harder it is to see where the\nword boundaries are.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Jun 2020 12:16:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: suggest to rename enable_incrementalsort"
},
{
"msg_contents": "On Mon, Jun 22, 2020 at 11:22 AM Bruce Momjian <bruce@momjian.us> wrote:\n> I think the big problem is that, without the extra underscore, it reads\n> as increment-alsort. ;-)\n\nI know you're joking, but I think there's a serious issue here. We\noften both omit word separators and also abbreviate, and I doubt that\nthe meaning is always obvious to people whose first language is\nJapanese or Russian or something. The only human language other than\nEnglish in which I have any competence at all is Spanish, and if\nsomebody speaks Spanish to me the way that it's explained in a\ntextbook, I can understand it fairly well, especially if we're talking\nabout the kinds of topics that textbooks discuss rather than technical\nstuff. But as soon as you start to use abbreviations or idioms, you're\ngoing to lose me. Without a doubt, the best solution to this problem\nwould be for me to have better Spanish, but in the absence of that, on\nthose occasions when I need to communicate in Spanish, I sure do like\nit when people are willing and able to make that as easy for me as\nthey can. I suspect other people have similar experiences.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 22 Jun 2020 12:17:13 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: suggest to rename enable_incrementalsort"
},
{
"msg_contents": "I think the change makes a lot of sense. The only reason I had it as\nenable_incrementalsort in the first place was trying to broadly\nfollowing the existing GUC names, but as has already been pointed out,\nthere's a lot of variation there, and my version of the patch already\nchanged it to be more readable (at one point it was\nenable_incsort...which is short...but does not have an obvious\nmeaning).\n\nI've attached a patch to make the change, though if people are\ninterested in Tom's suggestion of enable_sort_incremental I could\nswitch to that.\n\nJames",
"msg_date": "Thu, 2 Jul 2020 11:25:33 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: suggest to rename enable_incrementalsort"
},
{
"msg_contents": "On 2020-07-02 17:25, James Coleman wrote:\n> I think the change makes a lot of sense. The only reason I had it as\n> enable_incrementalsort in the first place was trying to broadly\n> following the existing GUC names, but as has already been pointed out,\n> there's a lot of variation there, and my version of the patch already\n> changed it to be more readable (at one point it was\n> enable_incsort...which is short...but does not have an obvious\n> meaning).\n> \n> I've attached a patch to make the change,\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 5 Jul 2020 12:20:21 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: suggest to rename enable_incrementalsort"
}
] |
[
{
"msg_contents": "Hello,\n\nWhile reviewing a documentation patch, I noticed that a few tags where \nwrong in \"catalog.sgml\". Attached patch fixes them.\n\n-- \nFabien.",
"msg_date": "Sun, 21 Jun 2020 09:10:35 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "tag typos in \"catalog.sgml\""
},
{
"msg_contents": "On Sun, Jun 21, 2020 at 09:10:35AM +0200, Fabien COELHO wrote:\n> While reviewing a documentation patch, I noticed that a few tags where wrong\n> in \"catalog.sgml\". Attached patch fixes them.\n\nGood catches, thanks Fabien. I will fix that tomorrow or so.\n--\nMichael",
"msg_date": "Sun, 21 Jun 2020 19:31:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: tag typos in \"catalog.sgml\""
},
{
"msg_contents": "On Sun, Jun 21, 2020 at 07:31:16PM +0900, Michael Paquier wrote:\n> Good catches, thanks Fabien. I will fix that tomorrow or so.\n\nAnd applied to HEAD.\n--\nMichael",
"msg_date": "Mon, 22 Jun 2020 13:48:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: tag typos in \"catalog.sgml\""
},
{
"msg_contents": "\n>> Good catches, thanks Fabien. I will fix that tomorrow or so.\n>\n> And applied to HEAD.\n\nOk.\n\nShould it be backpatched? I'm not sure what the usual practice is wrt to \nsmall fixes in the doc.\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 22 Jun 2020 21:09:28 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: tag typos in \"catalog.sgml\""
},
{
"msg_contents": "On Mon, Jun 22, 2020 at 09:09:28PM +0200, Fabien COELHO wrote:\n> Should it be backpatched? I'm not sure what the usual practice is wrt to\n> small fixes in the doc.\n\nThe text is right, and this impacts only the appearance of the text,\nso I did not see that this was enough for a backpatch.\n--\nMichael",
"msg_date": "Tue, 23 Jun 2020 10:03:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: tag typos in \"catalog.sgml\""
},
{
"msg_contents": "Bonjour Michaᅵl,\n\n>> Should it be backpatched? I'm not sure what the usual practice is wrt to\n>> small fixes in the doc.\n>\n> The text is right, and this impacts only the appearance of the text,\n> so I did not see that this was enough for a backpatch.\n\nOk. It would mean that possible other doc patches on the same area would \nnot backpatch easily, but the doc is expected to be more or less frozen \nfor a given version. So fine with me.\n\n-- \nFabien.",
"msg_date": "Tue, 23 Jun 2020 06:32:53 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: tag typos in \"catalog.sgml\""
}
] |
[
{
"msg_contents": "\nHello devs,\n\nI've been annoyed that the documentation navigation does not always has an \n\"Up\" link. It has them inside parts, but the link disappears and you have \nto go for the \"Home\" link which is far on the right when on the root page \nof a part?\n\nIs there a good reason not to have the \"Up\" link there as well?\n\n-- \nFabien.\n\n\n",
"msg_date": "Sun, 21 Jun 2020 09:19:27 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Missing \"Up\" navigation link between parts and doc root?"
},
{
"msg_contents": "On Sun, Jun 21, 2020 at 09:19:27AM +0200, Fabien COELHO wrote:\n> \n> Hello devs,\n> \n> I've been annoyed that the documentation navigation does not always has an\n> \"Up\" link. It has them inside parts, but the link disappears and you have to\n> go for the \"Home\" link which is far on the right when on the root page of a\n> part?\n> \n> Is there a good reason not to have the \"Up\" link there as well?\n\nYes, please. I asked for this feature in December of 2018 but have not\ngotten around to implementing it:\n\n\thttps://www.postgresql.org/message-id/flat/20181231235858.GB3052%40momjian.us\n\nCan someone make this improvement?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Mon, 22 Jun 2020 11:26:11 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Missing \"Up\" navigation link between parts and doc root?"
},
{
"msg_contents": "On 2020-06-21 09:19, Fabien COELHO wrote:\n> I've been annoyed that the documentation navigation does not always has an\n> \"Up\" link. It has them inside parts, but the link disappears and you have\n> to go for the \"Home\" link which is far on the right when on the root page\n> of a part?\n> \n> Is there a good reason not to have the \"Up\" link there as well?\n\nThe original stylesheets explicitly go out of their way to do it that \nway. We can easily fix that by removing that special case. See \nattached patch.\n\nThat patch only fixes it for the header. To fix it for the footer as \nwell, we'd first need to import the navfooter template to be able to \ncustomize it. Not a big problem though.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 3 Jul 2020 10:59:36 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing \"Up\" navigation link between parts and doc root?"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-06-21 09:19, Fabien COELHO wrote:\n>> I've been annoyed that the documentation navigation does not always has an\n>> \"Up\" link. It has them inside parts, but the link disappears and you have\n>> to go for the \"Home\" link which is far on the right when on the root page\n>> of a part?\n>> \n>> Is there a good reason not to have the \"Up\" link there as well?\n\n> The original stylesheets explicitly go out of their way to do it that \n> way.\n\nCan we find any evidence of the reasoning? As you say, that clearly was\nan intentional choice.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Jul 2020 09:45:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Missing \"Up\" navigation link between parts and doc root?"
},
{
"msg_contents": "On 2020-Jul-03, Tom Lane wrote:\n\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > On 2020-06-21 09:19, Fabien COELHO wrote:\n> >> I've been annoyed that the documentation navigation does not always has an\n> >> \"Up\" link. It has them inside parts, but the link disappears and you have\n> >> to go for the \"Home\" link which is far on the right when on the root page\n> >> of a part?\n> >> \n> >> Is there a good reason not to have the \"Up\" link there as well?\n> \n> > The original stylesheets explicitly go out of their way to do it that \n> > way.\n> \n> Can we find any evidence of the reasoning? As you say, that clearly was\n> an intentional choice.\n\nIf it helps, this seems to have been first introduced in commit b8691d838be.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 3 Jul 2020 15:57:44 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing \"Up\" navigation link between parts and doc root?"
},
{
"msg_contents": "\n>> The original stylesheets explicitly go out of their way to do it that\n>> way.\n>\n> Can we find any evidence of the reasoning? As you say, that clearly was\n> an intentional choice.\n\nGiven the code, my guess would be the well-intentioned but misplaced \ndesire to avoid a redundancy, i.e. two links side-by-side which point to \nthe same place.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sat, 4 Jul 2020 08:44:03 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: Missing \"Up\" navigation link between parts and doc root?"
},
{
"msg_contents": "\nHello Peter,\n\n> The original stylesheets explicitly go out of their way to do it that way. \n> We can easily fix that by removing that special case. See attached patch.\n>\n> That patch only fixes it for the header. To fix it for the footer as well, \n> we'd first need to import the navfooter template to be able to customize it.\n\nThanks for the patch, which applies cleanly, doc compiles, works for me \nwith w3m.\n\n> Not a big problem though.\n\nNope, just mildly irritating for quite a long time:-) So I'd go for back \npatching if it applies cleanly.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sat, 4 Jul 2020 08:47:53 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: Missing \"Up\" navigation link between parts and doc root?"
},
{
"msg_contents": "On Sat, Jul 4, 2020 at 08:47:53AM +0200, Fabien COELHO wrote:\n> \n> Hello Peter,\n> \n> > The original stylesheets explicitly go out of their way to do it that\n> > way. We can easily fix that by removing that special case. See attached\n> > patch.\n> > \n> > That patch only fixes it for the header. To fix it for the footer as\n> > well, we'd first need to import the navfooter template to be able to\n> > customize it.\n> \n> Thanks for the patch, which applies cleanly, doc compiles, works for me with\n> w3m.\n> \n> > Not a big problem though.\n> \n> Nope, just mildly irritating for quite a long time:-) So I'd go for back\n> patching if it applies cleanly.\n\nCan we get Peter's patch for this applied soon? Thanks. Should I apply\nit?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Tue, 25 Aug 2020 15:48:51 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Missing \"Up\" navigation link between parts and doc root?"
},
{
"msg_contents": "On 2020-08-25 21:48, Bruce Momjian wrote:\n> On Sat, Jul 4, 2020 at 08:47:53AM +0200, Fabien COELHO wrote:\n>>\n>> Hello Peter,\n>>\n>>> The original stylesheets explicitly go out of their way to do it that\n>>> way. We can easily fix that by removing that special case. See attached\n>>> patch.\n>>>\n>>> That patch only fixes it for the header. To fix it for the footer as\n>>> well, we'd first need to import the navfooter template to be able to\n>>> customize it.\n>>\n>> Thanks for the patch, which applies cleanly, doc compiles, works for me with\n>> w3m.\n>>\n>>> Not a big problem though.\n>>\n>> Nope, just mildly irritating for quite a long time:-) So I'd go for back\n>> patching if it applies cleanly.\n> \n> Can we get Peter's patch for this applied soon? Thanks. Should I apply\n> it?\n\nI have made the analogous changes to the footer as well and committed this.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 6 Sep 2020 16:59:11 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing \"Up\" navigation link between parts and doc root?"
},
{
"msg_contents": "On Sun, Sep 6, 2020 at 04:59:11PM +0200, Peter Eisentraut wrote:\n> On 2020-08-25 21:48, Bruce Momjian wrote:\n> > On Sat, Jul 4, 2020 at 08:47:53AM +0200, Fabien COELHO wrote:\n> > > \n> > > Hello Peter,\n> > > \n> > > > The original stylesheets explicitly go out of their way to do it that\n> > > > way. We can easily fix that by removing that special case. See attached\n> > > > patch.\n> > > > \n> > > > That patch only fixes it for the header. To fix it for the footer as\n> > > > well, we'd first need to import the navfooter template to be able to\n> > > > customize it.\n> > > \n> > > Thanks for the patch, which applies cleanly, doc compiles, works for me with\n> > > w3m.\n> > > \n> > > > Not a big problem though.\n> > > \n> > > Nope, just mildly irritating for quite a long time:-) So I'd go for back\n> > > patching if it applies cleanly.\n> > \n> > Can we get Peter's patch for this applied soon? Thanks. Should I apply\n> > it?\n> \n> I have made the analogous changes to the footer as well and committed this.\n\nI see this only applied to master. Shouldn't this be backpatched?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Tue, 8 Sep 2020 15:10:53 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Missing \"Up\" navigation link between parts and doc root?"
},
{
"msg_contents": "On 2020-09-08 21:10, Bruce Momjian wrote:\n> On Sun, Sep 6, 2020 at 04:59:11PM +0200, Peter Eisentraut wrote:\n>> I have made the analogous changes to the footer as well and committed this.\n> \n> I see this only applied to master. Shouldn't this be backpatched?\n\nI wasn't planning to. It's not a bug fix.\n\nOther thoughts?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 10 Sep 2020 16:00:47 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing \"Up\" navigation link between parts and doc root?"
},
{
"msg_contents": "\n> On 2020-09-08 21:10, Bruce Momjian wrote:\n>> \n>> I see this only applied to master. Shouldn't this be backpatched?\n>\n> I wasn't planning to. It's not a bug fix.\n>\n> Other thoughts?\n\nYep. ISTM nicer if all docs have the same navigation, especially as \ngoogling often points to random versions. No big deal anyway, in six years \nall supported versions will have an up link on the part level! :-)\n\n-- \nFabien.\n\n\n",
"msg_date": "Fri, 11 Sep 2020 14:58:51 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: Missing \"Up\" navigation link between parts and doc root?"
},
{
"msg_contents": "On 2020-09-11 14:58, Fabien COELHO wrote:\n> \n>> On 2020-09-08 21:10, Bruce Momjian wrote:\n>>>\n>>> I see this only applied to master. Shouldn't this be backpatched?\n>>\n>> I wasn't planning to. It's not a bug fix.\n>>\n>> Other thoughts?\n> \n> Yep. ISTM nicer if all docs have the same navigation, especially as\n> googling often points to random versions. No big deal anyway, in six year\n> all supported versions will have a up link on the part level! :-)\n\nOkay, backpatched to PG10.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 12 Sep 2020 20:37:52 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing \"Up\" navigation link between parts and doc root?"
}
]